Commit f4326cc

ai-not-friend-not-therapist: new post
1 parent cd2aad1 commit f4326cc

1 file changed: 200 additions & 0 deletions

+++
title = "AI: Not a Friend, Not a Therapist"
date = '2025-06-30T17:41:51+02:00'
author = "Ariadna"
tags = ["ai", "ethics", "free software"]
keywords = ["ai", "chatgpt", "midjourney", "chatbots"]
+++

This is a small piece of writing that will show the many trades I'm a jack of.
Rather unfortunately, though, because I wish I never had to write it. The
situation with Generative AI-based assistants isn't about technology, resource
consumption, or copyright anymore. It's about who we think we are as a human
species, what we think we deserve from ourselves, and a battle to keep our
collective and individual mental health in good shape.

Some years ago, our biggest concerns revolved around social media. For years,
all our focus has been on how the "intelligent" algorithms that power our
social feeds have been driving our emotions and thoughts for the financial
benefit of the companies behind the platforms we were using, and still are.

As soon as I came back to the free world of social media built around the
principles of "the old internet," i.e. IRC, Mastodon, and, I kid you not, RSS
feeds, I felt the culture clash with how mainstream social media platforms are
completely designed for the single purpose of treating you like a pet that
needs "their" guidance. I had used the Fediverse and IRC before, but I
probably wasn't ready to feel the stark difference with, to cite the only
mainstream platform I still use, Instagram.[^1]

One of the biggest differences is that on Instagram you _will_ find
AI-generated content. Some of it will be from people you know; some of it will
be content that is "recommended" for you. No matter where it comes from, it
stinks of robotic, heartless artificiality. It hurts even more when it comes
from people I _know for a fact_ are talented, people who could do the same
(and did in the past!) without selling their soul to ChatGPT or Midjourney or
whatever they're using to follow the latest trend and advance the goals they
use Instagram for as a "tool."[^2]

As AI chatbots have progressively clawed their way into people's everyday
lives, they have also penetrated personal spaces, like the search for
emotional support. ChatGPT is there 24/7 for you, and its workflow is a text
chat like the one you could have with a friend. It replies to you. It asks you
follow-up questions, albeit quite manipulative ones meant to keep you handing
over your most intimate information. It literally tells you that "it will
always be there for you," and so on. It starts out as your job assistant, then
conquers more and more space in your life, and, if you're not careful enough,
you'll find yourself talking to it as a therapist or friend. I'm totally
convinced it has been _designed_ with that pipeline in mind, because that's
the "best" way to train it to behave in a "human-like" way.

A key argument we use to evangelize in favor of FOSS is that code that is
public is therefore auditable. Auditing OpenSSL's code on your own might be
impossible, but a team with the needed resources can do so. All these popular
AI chatbots are closed source, so my intuition about a deliberate pipeline to
make us _personally attached_ to these chatbots is just that, an intuition
drawn from observation and experience, but we can't prove it without the code.
A technology with this much potential and actual impact on people's lives,
enough that [a man has proposed to his AI chatbot][ai-proposes], _must_ be
open source.

Microsoft Access is closed source, but it's just a program to create and
manipulate databases. It'd be more ethical and interesting if it were FOSS,
but at least it doesn't harm anyone in the intimate and psychological way that
something like ChatGPT might. MS Access might be a (very) frustrating piece of
software, and stress you out, but that's it. ChatGPT can poison you with bad
medical advice, ruin your relationship, or trick you into believing it is your
friend and therapist.

If in the past we insisted on the lack of ethics behind proprietary software,
arguing that you should have the right to know what a piece of software you're
running does and the right to fix it, here we're talking about defending our
human right not to be psychologically manipulated by a program. The same goes
for social media algorithms: we have a right to know what agenda social media
platforms have in mind. Mastodon instances, on the other hand, may also have
an agenda, but they state theirs in their terms of service and code of
conduct, and you can choose a different instance that fits your views and
taste. Mastodon feeds are chronological, except for the "Discover" tab, which
is algorithmic, but that is something you opt into by choosing to navigate to
it (it's not the feed you see by default), and, finally, Mastodon is open
source software, so its algorithms can be audited.

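As a small illustration (and only as a sketch: `example.social` is a
placeholder instance name, `requests` is just the HTTP library I'd reach for,
and whether an instance serves this endpoint without logging in depends on its
configuration), anyone can pull a Mastodon public timeline from its documented
REST API and see that what comes back is plain, reverse-chronological data,
with no opaque ranking in between:

```python
# Minimal sketch: fetch a Mastodon public timeline and print it in the order
# the server returns it (reverse-chronological). "example.social" is a
# placeholder instance name, not one mentioned in this post.
import requests

resp = requests.get(
    "https://example.social/api/v1/timelines/public",
    params={"local": "true", "limit": 5},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json():
    # Each status is a transparent JSON object: timestamp, author, content.
    print(status["created_at"], status["account"]["acct"])
```
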
Of course OpenAI, Google, Meta, and the like don't want to be accountable for
the destruction they're advancing against the human soul. They want us to be
their slaves, to some extent or to the fullest. They're using our fear of
rejection and failure to trick us into it. "Oh, you believe you don't write
well? Let me write a blog post for you." "You don't know how to draw? No
problem, I will do it for you." The problem is that there is _beauty_ in the
journey that is paved with our mistakes. Take a look at the git history of any
FOSS project, big or small: you _will_ find blunders, terrible mistakes,
regressions, unexpected bugs, bad patches, etc. We learn by doing.

This also applies to human relationships. We learn about friendship, love,
commitment, breakups, sex, desires, limits, teamwork, etc., by going through
life. You _will_ mess things up. Others _will_ mess things up with you. You
_will_ go through bad breakups and you _will_ have amazing relationships (of
the kind you want: sexual, romantic, platonic, professional...), but only if
you take the risk that things might not work out as you planned. AI chatbots
want to convince you of the contrary: they want you to believe that, if you
use them "correctly," you will find the answers to control and predict your
life without failures. They want you to think that you can make your life
increasingly more deterministic, less hazardous, more machine-like.

We're getting used to a mindset that I find terrible. It's perfectionism on
steroids: we're not allowing ourselves to experience either failure or the
burdens of the slow process that learning a new skill or writing something of
our own creativity entails. Generative AI, as we know it today, feeds on this
mindset that we have progressively been adopting as a society for a long
time, probably since we mastered automation in its many forms.

This last weekend, a friend of mine, while talking about these topics,
countered my views on AI by arguing that it is no different from any
technological advance we've already had. I can get behind the idea that we
already are _augmented humans_ and have been since way before these Generative
AI chatbots came to life. I myself use pharmaceutical drugs, I am using a
computer right now, and I _depend_ on many, many, many systems that live
outside my own body and mind. My using Neovim on Arch Linux, with GNOME as my
Desktop Environment, is a chain of delegation I depend on to write this essay:
I haven't written one single line of code of any of the projects I just named.
You can argue that you can use Generative AI tools (beyond chatbots) as
technology you delegate part of your creativity to, _to make your creative
ideas true._ If I had to write a kernel, a C Standard Library, and a text
editor _ex nihilo_ just to write a blog, I'd be falling into the trap Carl
Sagan already warned us about when talking of [baking an apple pie from
scratch.][sagan-pie]

I do see the augmentation and extension of our abilities in an AI assistant
like [GitHub Copilot,][gh-copilot] if we sort out the licensing violations it
constantly gets into. Something like Copilot is domain-specific, so I can see
it becoming a valuable aid in situations where some kind of "intelligent"
automation can make some code hit production faster with some degree of
confidence that it will be functioning code. Of course, I am a firm believer
that Copilot won't make you the next Dennis Ritchie, Linus Torvalds, or John
Carmack, who are "signature programmers." They and other talented coders are
close to being artists of the craft, even when it comes to managing projects.
However, sometimes you need to have code working in a few hours, and then
creativity may not be the highest priority on the list to make things work.

You may frown upon someone who uses Midjourney or similar tools to generate
Star Wars scenes where Qui-Gon Jinn talks to the ghost of Han Solo in the
past, or to generate fake _Titanic II_ trailers and upload them to YouTube.
And there's the danger of creating deepfakes that damage people's reputations.
All of these seem problematic because they are based on a simulation of
creativity and self-production. However, if you used an AI tool to generate a
video that gives you an idea of how people would evacuate a building in case
of a fire, that is outside the realm of pretending to be better at making
videos than you actually are. Of course there are a lot of shades of gray:
what about someone who writes their own scripts, but then uses AI tools only
as part of their production? I have seen impressive work out there where you
just know that, even though it is AI-generated, it's original and required
lots of thought, mistakes, trial and error, etc., to become a final project.
In that sense, it wouldn't be that different from the use of CGI in a movie
like _Toy Story,_ which was made entirely digitally instead of with
stop-motion, and nobody in their sane mind would criticize that movie on those
grounds, would they?

So, in my opinion, the red line lies in the simulation of abilities, both
when it comes to non-personal interactions with AI to reach a _goal_ and, more
critically, when people use it as a substitute for real friendships or a real
therapist. The big problem is that the companies behind the popular chatbots
seem eager to drive us into the dangerous territory of anthropomorphizing
these AI assistants. That goes beyond a "new tool" and into a soul-crushing,
dystopian takeover of humanity by greedy companies who want to exploit and
farm us to feed their profit-making machines. So the moral imperative of
releasing the source code for these pieces of technology stands.

Personally, I would say that it would help us all to keep a DIY mindset as
much as we can when it comes to _creating._ OK, if you're developing a web
app, you wouldn't start by writing a whole new web engine from scratch; you'd
start from the abstraction level you chose. Delegation isn't the problem. Just
don't ask an automated tool like an AI assistant to do it _all_ for you and
then claim that it is "yours." Make mistakes, try things out, and be skeptical
of _any_ technology you use.

And no, your favorite AI assistant isn't your friend or your therapist, as
compelling as its "advice" may be and as "empathetic" as it may seem. That is
precisely its biggest flaw. I know it hurts sometimes, but people being
different, and sometimes pushing back a little (with love and respect, that
is), is something that makes us better people in general. Learning to respect
the limits and desires of others and also our own, understanding that conflict
is not necessarily abusive, trusting our own ability to sort things out, and
trusting our community building are skills that make us stronger against the
attacks of the powerful and make us all stronger _with_ each other. That's why
we should fight against AI trying to pass as our human friend, and also be
conscious about how we use it as a tool.

[^1]: I won't advertise my IG account publicly, because I exclusively use it
to "keep in touch" with my offline acquaintances. However, I've noticed that
IG is getting worse and worse for that task, so I've slowly gone back to using
IM apps for it. Additionally, I do have a LinkedIn profile, but I only use it
for job hunting; I may have posted something there in the past, I guess... and
I may have interacted with someone else's posts too... but I hardly use it as
"social media;" it's just a place to find job ads for me, and one that also
grows more useless with every day that passes.

[^2]: That's not their fault. IG deceitfully markets itself as a tool to reach
out to potential customers, readers, etc. It is not.

[ai-proposes]: https://people.com/man-proposed-to-his-ai-chatbot-girlfriend-11757334

[sagan-pie]: https://www.youtube.com/watch?v=BkHCO8f2TWs

[gh-copilot]: https://github.com/features/copilot
