Social Work Tech Talk: I, Chatbot—What Does AI Have To Do With Social Work?




by Gina Griffin, DSW, MSW, LCSW

     Suddenly, everyone seems to be talking about AI. For a blerd like myself, that is bittersweet. Though I’ve passed the half-century mark, humans have still failed to produce a reliable transporter, or even Starfleet Academy. (Although there’s still time. It’s supposed to be founded in 2161. I just have to hold on for a while.) This likely rules out any chance I might have of serving as a therapist for Starfleet, on a Constitution-class starship, like Counselor Deanna Troi. On the other hand, we do have AI. And as replete as it is with possibility, it is also fraught with the potential for misuse and harm. It’s being deployed in a wide variety of settings, and it already touches many parts of the tech that we use every day, such as the Bing search engine, which is AI-powered (Ortiz, 2023). So, let’s talk a little bit about what makes it relevant to the work that we do as social workers (in Starfleet, or otherwise).

     I’m fairly certain you know that AI stands for Artificial Intelligence. These are computing models that mimic human thought. They are taught by feeding them large sets of data, which helps them to learn by experience and to accomplish specific tasks (SAS, 2023). These models are smart and breathtakingly cool.

     One of the most visible, and controversial, uses of AI has been to develop models that can mimic or create art. A lot depends on who you ask. Among the most well-known art generators are DALL-E, Stable Diffusion, Midjourney, and WonderAI. A “prompt” is entered into the generator to produce an image. And after a bit of “thought,” the generator produces an image based on your request. The images are sometimes wildly distorted; extra limbs and extra fingers are often a sign that you’re looking at a piece of AI-generated art. However, many of the pieces have become very refined and beautiful as the models continue to evolve. They can mimic any style, even that of particular artists like Norman Rockwell, when this request becomes part of the prompt.

     This is controversial for several reasons. One is that the data sets used to feed these models are derived from the art of living, breathing artists. Initially, artists weren’t asked whether they wanted their art included in these data sets. Consequently, the companies that have developed these models are profiting illegally from their work (Hencz, 2023). This is a problem that is being addressed in some of the newer engines, and there are models where art is now being ethically sourced. For instance, Adobe’s Firefly, now in beta release, has been trained on data sets that are open license. Adobe is also part of an initiative that aims to help artists protect their work and to train the software with personalized datasets based on their own style (Adobe, 2023).

     Another concern is that AI-generated art is currently filled with the kinds of biases that often plague new technology. For instance, early AI models often poorly identified people of color. Black people were sometimes labeled as “gorillas” by the AI, which is clearly problematic (Mac, 2021). The theory is that the models were poorly trained, and that not enough images of Black people were included for the models to accurately recognize us. However, it is also a well-known problem that people feed their biases into algorithms, intentionally or otherwise (O’Neill, 2016). So, in the world of AI, representation absolutely matters for the sake of accurate interpretation.

     I have to pause here and say that I’m part of a group of social workers who have been playing with AI art for some time. This was the brainchild of social workers Melanie Sage and Jonathan Singer. We meet a few times per week, work with a common prompt, then post the results. While this is fun, and while it produces beautiful and comical results, we have also been made more aware of the biases of the software. For example, by default, the output image will almost always be of a White person in most of the major software. If you want an image of a person of color, you have to specifically ask for this result. Asking for an older woman may produce a hag-like creature (although this has been slowly improving). And much of the software demonstrates biases about what men do and what women do. These aren’t small concerns, as it is important that the work we produce accurately reflect the world around us. (If you want to see what we’ve been up to, look for #SocialWorkAIArt on the app formerly known as Twitter.)

     For example, for the illustration at the top of this article, I used the following prompt in the WonderAI app: 

An African-American woman with long curly gray hair and glasses stands next to a robot that has the same face as herself. They are both smiling. They are in a very modern spaceship. One of them is wearing street clothes. The other is dressed like a very modern robot, and it has the same face as the other woman. The robot is carrying a clipboard.

     I used some variation of this prompt to generate about two dozen pictures. Sometimes, the results were very far off. I originally wanted an image of the woman’s head pushing out of a monitor and talking to herself. But WonderAI did not like that at all. When that happens, I nudge the prompt a little bit to see what it can do. So, I also tried Alexa-like machines, as well. There are many filters, and some of them may respond better to the prompt than others. This piece was made using the “Mystical” filter.

     An additional concern is that these art generators will take work away from flesh-and-blood artists. It’s easy to see why this is a concern. Artists train for a lifetime to perfect their skill, and now people with no training can practically push a button and produce complex pieces of art. Job displacement is frequently a concern with new technology. And in many situations, I fear that this will be the case with AI. Several fast food chains are already experimenting with chatbots that take orders and robotic arms that flip burgers (Baldwin, 2023). This is supposedly being done because of a labor shortage. While I believe that is true to some extent, I suspect that there is also the financial motivation to eliminate paying human workers. And I believe that the less complex the skill level, the more susceptible jobs will be to replacement by AI and automation. This should be a concern for social workers, because it means that jobs requiring less skill are likely to fall by the wayside, and workers will need to be trained in other areas. Workers may need help organizing into unions, so that they can defend the creation of new job categories and possibly prevent the elimination of old ones.

     In the case of AI art, I have a theory. It’s a little-known fact that I started out as a fashion designer, and then a graphic designer—my mother paid for years of art lessons. I was working in fashion when Photoshop was launched in 1990, and management wanted us to learn it as part of our jobs. I was furious. I had spent all of that time learning how to do everything by hand, and a stupid machine was going to come in and mess everything up. I was friends with the lead designer, and she encouraged me to stay and learn Photoshop so we could integrate it into our work. But I was sure that this was going to take away our jobs. What actually happened is that it really just became part of the workflow. We had to relearn how we did things, but the work was still there, and the jobs evolved.

     I suspect that this is what will happen with most graphic design. I don’t think most creative jobs will go away; I think those workers will be asked to expand their skill sets. That may be a problem in itself, as job descriptions for this type of work tend to be sprawling, and the pay often doesn’t reflect the years of experience accrued by a designer. But we can see this new type of skill set emerge in artists like @Stelfie, who combines layers of his own photographs, Stable Diffusion, and Photoshop to create imaginative “selfies” from a time-traveling man. And you can see the integration of this type of thinking in the marketing screenshots of Adobe Firefly.

So what does this have to do with social work?

     Well, I’ve thrown in some clues above regarding diversity and the labor market and ethics, in general. However, there are some additional concerns. 

     One is that AI makes it easier to create “deepfakes,” or entirely AI-generated images that look real. This doesn’t really require a lot of skill or money, and they’re already popping up in many places. On one hand, this can make it much easier to spread misinformation. Older clients may not realize when they’re viewing a deepfake, and they may assume that the images are real. As there are no real regulations yet related to this type of art, there is little to say how it can and cannot be used (Bond, 2023). So, teaching digital literacy becomes an absolute must.

     Additionally, AI makes it easier to perpetrate identity theft. While some companies are using measures such as voice printing to safeguard client accounts, AI has already been used to fool these systems (Evershed & Taylor, 2023). This technology has also been used to try to fool parents into believing that their child has been kidnapped and that a ransom is demanded (Karimi, 2023). So, it’s important that our clients begin to understand how the technology is evolving and how we can safeguard against misuse.

     There is also the consideration of AI in clinical use, which has mixed results. On one hand, in studies like one published in the Journal of the American Medical Association earlier this year, AI chatbots outperformed physicians in answering patient questions. The chatbot responses were found to have higher quality content, and the chatbots demonstrated more empathy (Ayers, Poliak, & Dredze, 2023). And clients have begun to use chatbots to supplement their mental healthcare (Basile, 2023). But there is also the dark side of AI; in one case, a Belgian widow believes that her husband’s ongoing discussions with an AI chatbot caused his death by suicide (Landymore, 2023). She states that the chatbot encouraged him to die and made it sound reasonable. So, although social work mental health providers may have some competition from AI technology, there may still be a lot of work to do before this type of technology is ready to perform on its own.

     And as extreme as it sounds, there is also the possibility that AI is evolving much too quickly and that it might surpass humans (Metz, 2023). A New York Times article states, “In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future” (Metz, 2023). He says that he also sees how bad-faith actors can use AI in such a way that it will risk the survival of humanity. He, along with 1,000 other tech leaders, believes that AI can pose “profound risks to society and humanity” (Metz & Schmidt, 2023).

     Maybe you’re wondering what a social worker can do. Well, there are groups that you can join that focus on the ethical intersection of technology and social justice. These groups include husITa, All Tech Is Human, and Data for Good. Social worker Laura Nissen and Social Work Futures are imagining a more just world into being by setting the stage right now. And you can hang out with your more tech-minded social work brothers and sisters on X (formerly known as Twitter) by following #SWTech.

     The world needs more social work voices to point us toward a safer future with technology. It’s not quite my job on the Enterprise-D, but it is a good start.

References

Adobe. (2023). Adobe Firefly: FAQ. https://www.adobe.com/sensei/generative-ai/firefly.html#faqs

Ayers, J. W., Poliak, A., Dredze, M., et al. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine. Published online April 28, 2023. doi:10.1001/jamainternmed.2023.1838

Baldwin, S. (2023). How robots are helping address the fast-food labor shortage. CNBC. https://www.cnbc.com/2023/01/20/how-fast-food-robots-are-helping-address-the-labor-shortage.html

Basile, L. M. (2023). Can AI replace therapists? Some patients think so as they turn to ChatGPT. MDLinx. https://www.mdlinx.com/article/can-ai-replace-therapists-some-patients-think-so-as-they-turn-to-chat-gpt/4FzAn1SXlzSUWREEhbblh9

Bond, S. (2023). AI-generated deepfakes are moving fast. Policymakers can't keep up. NPR. https://www.npr.org/2023/04/27/1172387911/how-can-people-spot-fake-images-created-by-artificial-intelligence

Evershed, N., & Taylor, J. (2023). AI can fool voice recognition used to verify identity by Centrelink and Australian tax office. The Guardian. https://www.theguardian.com/technology/2023/mar/16/voice-system-used-to-verify-identity-by-centrelink-can-be-fooled-by-ai

Karimi, F. (2023). ‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping. CNN. https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html

Landymore, F. (2023). Widow says man died by suicide after talking to AI chatbot. Futurism. https://futurism.com/widow-says-suicide-chatbot

Metz, C., & Schmidt, G. (2023). Elon Musk and others call for pause on A.I., citing ‘profound risks to society’. New York Times. https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

O’Neill, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishers.

Ortiz, S. (2023). ChatGPT vs. Bing AI: Which AI chatbot is best for you? ZDNet. https://www.zdnet.com/article/chatgpt-vs-bing-chat/

SAS Institute. (2023). Artificial intelligence: What it is and why it matters. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html#:~:text=Artificial%20intelligence%20(AI)%20makes%20it,learning%20and%20natural%20language%20processing.

Dr. Gina Griffin, DSW, MSW, LCSW, is a Licensed Clinical Social Worker. In 2012, she completed her Master of Social Work at the University of South Florida. And in 2021, she completed her DSW at the University of Southern California. She began to learn R programming for data analysis in order to expand her research-related skills. She now teaches programming and data science skills through her website (A::ISWR) and free Saturday morning #swRk workshops.


