Twitter AI: Researchers at large technology companies have not had a particularly good year. Research teams, hired to help executives understand the flaws of their platforms, invariably uncover inconvenient truths. Companies hire teams to develop “responsible artificial intelligence,” but recoil when those employees uncover algorithmic bias.
They brag about the quality of their internal research, yet are quick to disavow it when it reaches the public eye. This story played out at Google with the forced departure of ethical AI researcher Timnit Gebru and the ensuing fallout for her team. At Facebook, it played out when whistleblower Frances Haugen came forward and the Facebook Files were published.
In light of these considerations, it is always noteworthy when a tech platform publishes one of those unpleasant findings for the entire world to see. Twitter accomplished exactly that at the end of October.
An accompanying 27-page report, published alongside Twitter’s blog post on the subject, goes into greater detail on the study’s findings and methods. It wasn’t the first time this year the company had offered empirical backing for criticism of its work that had, until then, been hypothetical. This summer, Twitter held an open competition, with cash prizes, to identify bias in its photo-cropping algorithm.
These findings were not confined to a closed meeting room, never to be disclosed. Instead, Rumman Chowdhury, who leads machine learning ethics and accountability at Twitter, presented them publicly at DEF CON, where she thanked the participants for helping demonstrate the real-world consequences of algorithmic bias. The winners were paid for their contributions.
On the one hand, I don’t want to overstate Twitter’s courage here. The results the company disclosed have opened it up to some criticism, but nothing likely to trigger a full Congressional investigation.
Furthermore, because the company is significantly smaller than Google or Facebook parent Meta, which both serve billions of users, anything its researchers discover is less likely to spark a global uproar.
Still, Twitter is under no obligation to perform this kind of public service. And in the long run, I believe doing so will strengthen the company and increase its value. Yet any CEO or board member would find it easy to build a case against it.
As a result, I’ve been looking forward to speaking with the team responsible for it. My virtual meeting with Chowdhury and Jutta Williams, the product lead on Chowdhury’s team, took place this week. (Awkwardly, as of October 28, the day Facebook rebranded as Meta, the Twitter team’s official name is Machine Learning Ethics, Transparency, and Accountability: META.)
I was interested in learning more about how Twitter is carrying out this work, how it has been welcomed internally, and what the company’s plans are for the future.
Twitter Is Betting That Public Input Will Speed Up and Improve Its Research
One of the more unusual aspects of Twitter’s AI ethics research is that the company pays outside researchers to take part in it. Chowdhury trained as an ethical hacker, and she has observed that her peers in cybersecurity are often able to secure networks faster and more effectively by offering financial incentives to the people who help them.
Twitter was the first time Chowdhury was able to work for an organization visible and impactful enough to do this, and ambitious enough to fund it, she said. (Chowdhury joined the company a year ago, when Twitter acquired her artificial intelligence risk management startup.) “It’s difficult to come across something like that.”
According to Chowdhury, it’s often difficult to get useful feedback from the general public about algorithmic bias. Much of the time, only the loudest voices are heard, and big problems fester simply because affected groups have no contacts at the platforms who could address them.
Other times, concerns are widespread throughout the population, and individual users may not be directly affected by the negative consequences. (Privacy is a common concern in situations like this.)
According to Chowdhury, Twitter’s bias bounty helped the company develop a framework for gathering and applying user feedback. After it was discovered that its cropping algorithm disproportionately favored the young, white, and beautiful, the company announced that it would no longer crop photographs in previews.
There Is No Real Agreement on What Ranking Algorithms “Should” Accomplish or How They Should Behave
Even if Twitter is able to unravel the question of what is causing right-wing content to spread more broadly, it will be unclear what action the company should take as a result.
If, for example, the cause lies not in the algorithm but in the behavior of certain accounts, what then? If right-wing politicians simply generate more comments than left-wing politicians do, there may be no obvious intervention for Twitter to make.
According to Chowdhury, “I don’t think anyone wants us to be in the business of forcing people’s voices into submission by some form of social engineering.” “However, we all agree that we do not want unpleasant or toxic content to be amplified, nor do we want unfair political prejudice to be amplified. Consequently, these are all things I would very much like us to begin unpacking.”
She believes that the conversation should be held in public.
Responsible Artificial Intelligence Is Difficult in Part Because No One Understands the Decisions Made by Computers
Ranking algorithms in social media feeds are probabilistic: they display items based on your likelihood of liking, sharing, or commenting on them. But there is no single algorithm making that decision; it is usually a mesh of many separate models (sometimes dozens), each making educated guesses that are then weighted differently in response to constantly shifting inputs.
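As a rough sketch of what such a mesh looks like, the snippet below combines several per-engagement probability models into one ranking score using fixed weights. The model names, weights, and probabilities are all invented for illustration; they are not Twitter’s actual models or values.

```python
# Hypothetical sketch of a feed-ranking score assembled from several
# probabilistic models. Names, weights, and probabilities are invented.

def rank_score(probabilities: dict, weights: dict) -> float:
    """Combine per-engagement probabilities into a single ranking score."""
    return sum(weights[k] * probabilities[k] for k in weights)

# Each "model" predicts the probability of one kind of engagement;
# the weights decide how much each prediction matters.
weights = {"like": 1.0, "reply": 13.5, "retweet": 1.0}  # invented weights

tweet_a = {"like": 0.30, "reply": 0.02, "retweet": 0.05}
tweet_b = {"like": 0.10, "reply": 0.08, "retweet": 0.02}

score_a = rank_score(tweet_a, weights)  # 0.30 + 0.27 + 0.05 = 0.62
score_b = rank_score(tweet_b, weights)  # 0.10 + 1.08 + 0.02 = 1.20
```

Because the reply model dominates in this sketch, tweet_b outranks tweet_a despite drawing fewer likes; content that reliably provokes replies ends up amplified regardless of why it provokes them. Shift the weights and the ordering changes, which is exactly why attributing an observed bias to any one model is so hard.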
All that guesswork is a fundamental reason it is so difficult to build “responsible” AI systems with confidence. Chowdhury drew a distinction between working on responsible artificial intelligence and working on cybersecurity.
She explained that in the field of security, it is usually possible to figure out why a system is vulnerable if you can figure out where the attacker gained access to it. However, with responsible AI, simply identifying a problem does not always provide useful information about its origins.
The company’s research into the amplification of right-wing content is a case in point: Twitter is convinced the phenomenon is real, but it can only speculate as to why it occurs.
It’s possible that something in the algorithm is at fault. However, it is possible that this is due to user behavior — for example, right-wing politicians may tweet in a way that attracts more comments, which causes their tweets to be weighted more heavily by Twitter’s systems.
According to Williams, who formerly worked at Google and Facebook, “there is a law of unintended consequences with massive systems.” “It might be any number of things. It’s possible that how we’ve weighted automated recommendations has something to do with it. But it was never intended to be a consequence of one’s political affiliation. So there is a tremendous amount of research to be done.”
How Do I Create an AI Twitter Bot?
- Make a folder and put the AI code there.
- Download your tweets (hit the button at the bottom to request your archive).
- Twitter sends you a CSV file containing your tweets, which you will need to clean up (in Excel or Google Sheets). For example, I dropped any tweets that began with “RT.”
- See Max’s step 2, then follow steps 3 and 4 to install TensorFlow and start training. Put the cleaned tweets into the input.txt file from step 2.
- Last step: connect it to Twitter. I followed the steps in this article to install Tweepy. Keep all the files in the same place.
Now, let’s connect everything.
Twitter Believes Its Algorithms Can Be Saved
Faced with the notion that social media feeds are unfathomably complex and cannot be explained even by their creators, we might consider shutting them down and deleting the code. Congress now regularly introduces legislation that would make ranking algorithms unlawful, hold platforms legally responsible for the content they recommend, or require platforms to let users opt out of algorithmic ranking.
Twitter’s crew, for one, believes that ranking will continue to exist in the future.
In Williams’ opinion, “the algorithm is something that can be saved.” “It is necessary to comprehend the algorithm. Furthermore, the algorithm’s inputs must be something that everyone can handle and control.”
If all goes according to plan, Twitter will develop exactly that kind of system.
Of course, there is a danger in writing a post like this: in my experience, teams like this one are extremely vulnerable to collapse. In the blink of an eye, an organization can go from celebrating a team’s findings and hiring for it enthusiastically to letting it wither through attrition amid budget constraints, or rebuilding it entirely amid personality disputes or regulatory worries. Even if Twitter’s early performance with META is encouraging, the team’s long-term viability is not guaranteed.
In the meantime, the work is only going to get harder. Twitter is currently working aggressively on a project to decentralize its network, which could place portions of that network beyond the reach of the company’s own efforts to build it more responsibly.
Twitter CEO Jack Dorsey has also expressed interest in creating an “app store for social media algorithms,” which would allow users to have greater control over how their feeds are ordered.
The task of ranking one feed responsibly is difficult enough – attempting to make an entire app store’s worth of algorithms “responsible” will be a far more arduous undertaking.
According to Williams, “I’m not convinced it’s viable for us to get into a marketplace of algorithms right off the bat. However, I believe it is conceivable for our algorithm to comprehend signals that have been curated by you. In other words, if someone uses profanity in a tweet, how sensitive are you to that kind of language? Are there specific terms you would consider really, extremely profane and would prefer not to see? How can we provide you with controls that allow you to set your preferences, so that that signal can be used in every type of recommendation you receive?”
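A user-curated sensitivity signal of the kind Williams describes could look, in rough sketch form, like the following. The preference structure, the word lists, and the penalty rule are all invented for illustration; nothing here reflects Twitter’s actual design.

```python
# Hypothetical sketch of a user-curated profanity-sensitivity signal
# feeding into ranking. The preference format and penalty rule are invented.

def apply_sensitivity(score: float, tweet_text: str, prefs: dict) -> float:
    """Down-rank or drop a tweet according to a user's own word lists.

    prefs = {"blocked": words never to show,
             "flagged": words the user is merely sensitive to,
             "sensitivity": 0.0 (don't care) .. 1.0 (very sensitive)}
    """
    words = set(tweet_text.lower().split())
    if words & prefs["blocked"]:
        return 0.0  # terms the user never wants to see: drop entirely
    if words & prefs.get("flagged", set()):
        # Scale mildly flagged content by the user's own sensitivity dial.
        return score * (1.0 - prefs["sensitivity"])
    return score

# Example preferences; "slur1" stands in for a term the user has blocked.
prefs = {"blocked": {"slur1"}, "flagged": {"damn"}, "sensitivity": 0.5}
```

The point of the design is that the user supplies the signal while the ranking pipeline stays the same, which matches Williams’ framing of curated signals rather than swappable algorithms.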
According to Williams, “I believe there is more of a third-party market for signals than there is for algorithms.” “When it comes to algorithms, you have to be very careful about what you put into them.”