Andrew Schlomchik

Tools aren't neutral

Artificial intelligence has gotten good. Really, really good. We’ve had a couple of false starts over the last few decades, moments in which the world was a bit too optimistic about how much A.I. had advanced, but this time, it just might be for real. No, we don’t have a self-aware general artificial intelligence, nor any model that is truly humanlike in nature. However, in just a few years, we have seen a massive jump in the capabilities of neural net-powered deep learning tools that can perform astounding feats of computing.


Although these rapid advances in artificial intelligence give many reasons for optimism, with a vast array of beneficial possibilities for scientific research and medicine, they also bring entirely new dimensions of risk around possible abuse and exploitation by malicious (or unwitting) actors. The age of advanced machine learning tools has arrived, and it will be difficult to manage the host of changes it will bring. For now, most people only have access to artistic tools like OpenAI’s DALL-E 2, a model that generates an original, synthetic image from a user’s text prompt. But it is clear that as A.I. technologies continue to advance and find their way into more and more applications, their impact on society will only grow more serious.


It isn’t going to take the humanoid, omniscient A.I. described in science fiction to transform our society; we already have many of the algorithmic tools necessary to see profound changes come to pass, and more are on the way. It is imperative that we start thinking about and shaping the technological world we want to live in now before these tools are so widespread that we no longer have that choice. As a society, we need to set out rules and guidelines, as well as expectations and boundaries, before these technologies are used in a way that we don’t want and, more worryingly, barely understand.


Neural networks are the bedrock of recent advances in A.I. They are loosely modeled on the brain: one or more layers of nodes (neurons) sandwiched between input and output layers, joined by vast numbers of connections (synapses). As in the brain, each node receives and processes signals and, depending on its weights and threshold, either passes a signal on to the next layer or doesn’t. This is of course an oversimplification. Here’s a more in-depth explanation, if you are interested.
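To make the idea a little more concrete, here is a minimal sketch of a single node: it takes a weighted sum of its inputs and "fires" only if that sum clears a threshold. This is a toy illustration in plain Python with NumPy; all the names and numbers are my own, not drawn from any particular system.

```python
import numpy as np

def node_output(inputs, weights, threshold):
    """One simplified 'neuron': weighted sum of inputs, fires only above a threshold."""
    signal = np.dot(inputs, weights)
    return 1.0 if signal > threshold else 0.0

# Three incoming signals feeding one node with hand-picked weights.
inputs = np.array([0.2, 0.9, 0.4])
weights = np.array([0.5, 1.0, -0.3])
print(node_output(inputs, weights, threshold=0.8))  # weighted sum is 0.88, so the node fires: 1.0
```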


The long and the short of it is that you can train a neural network on a data set, gradually improving its accuracy by adjusting the weights and thresholds of its nodes, until a working deep learning algorithm emerges. The frameworks that make this possible have also become increasingly open-source, letting more people, researchers among them, apply deep learning to their own work. The success of the neural network approach has compounded on itself, sparking a massive surge of investment in researching and developing the technology.
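As a rough sketch of what "adjusting the weights" looks like in practice, here is a tiny training loop written with PyTorch, one of those open-source frameworks. The network shape, the made-up data, and the learning rate are illustrative choices of mine, not anything from a real project.

```python
import torch
import torch.nn as nn

# A tiny network: 4 inputs -> 8 hidden nodes -> 1 output.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Toy data: 100 examples with 4 features each, and made-up targets.
x = torch.randn(100, 4)
y = torch.randn(100, 1)

for epoch in range(200):
    optimizer.zero_grad()
    prediction = model(x)
    loss = loss_fn(prediction, y)  # how wrong the network currently is
    loss.backward()                # work out how to nudge each weight
    optimizer.step()               # nudge the weights a little
```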


Scientists have been able to do some pretty incredible things using these newly advanced tools. In neuroscience, deep learning enabled researchers to map the roughly one hundred thousand neurons of a fly brain, an achievement that would once have been impossible. Google’s DeepMind used AlphaFold, a deep learning algorithm trained to predict the shapes of proteins, to predict the structures of some two hundred million proteins, nearly every protein known to science. Deep learning algorithms can detect diseases and identify cancers with accuracy that rivals, and in some studies exceeds, that of doctors, and they can take on the highly complicated work of modeling disease outbreaks and even the climate. The plethora of positive applications for deep learning in science gives me hope for technology. Years of hearing mostly about advances in social media services that don’t really contribute anything to society have jaded my view of the technological direction of humanity. In some sense, deep learning algorithms are a breath of fresh air. But a tool holds no allegiance to benevolence.


A few months before writing this article, I was listening to an episode of Radiolab about an algorithm designed to discover medicines for rare diseases. The twist was that one of its creators soon realized that by changing only a couple of parameters, the algorithm could instead predict some of the most potent chemical weapons known to science. To a certain degree, the piece was alarmist. To predict a chemical is one thing; to actually synthesize it is another matter entirely, as the podcast itself noted. But the episode illustrates an important point: although the intentions of a creator might be benevolent, it is all too easy to conceive of an evil application for any technological tool. The clichéd metaphor of the hammer as both a useful tool and a devastating weapon is especially apt here.


This doesn’t even cover some of the more banal dangers of these new algorithms, the most important of which is bias. The quality of an algorithm is by and large dependent on the quality of the data used to train it. As the saying goes in the tech world, “Garbage in, garbage out.” Algorithms can amplify the biases held by their creators, or the biases baked into their training data. Should we become too reliant on a particular algorithm, it might in fact make biases within our society yet more acute, and disproportionately harm certain groups. Problems of bias in algorithms are already emerging, such as in the use of A.I. to identify criminal suspects (you guessed it: many facial recognition algorithms are racist).
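Here is a deliberately simple illustration of "garbage in, garbage out": a toy loan-approval model trained on historical decisions that were biased against one group. Nothing in the model code mentions bias, yet the model reproduces it anyway. The scenario, feature names, and numbers are entirely invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "historical" loan data: feature 0 is income, feature 1 is group membership (0 or 1).
n = 2000
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)
X = np.column_stack([income, group])

# Biased labels: group 1 applicants were historically rejected most of the time,
# even at identical incomes. The bias lives in the data, not in the model code.
approved = (income > 45) & ~((group == 1) & (rng.random(n) < 0.7))

model = LogisticRegression().fit(X, approved)

# Two identical applicants who differ only in group membership.
print(model.predict_proba([[60, 0]])[0, 1])  # high approval probability
print(model.predict_proba([[60, 1]])[0, 1])  # noticeably lower, bias faithfully learned
```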


Many experts believe that the multitude of possible harmful applications of A.I. necessitates regulation. Perhaps this seems obvious to you, but some in the tech world don’t exactly share your view. Generative A.I., the resident cool kid on the twelve-hundred block of A.I. Street, has just been let loose on the masses. DALL-E 2, which I mentioned earlier, is an example of generative A.I.: models that draw on a gigantic data set to produce new, synthetic data following the patterns of the original. (An excellent piece in the New York Times describes the many questions this capability raises when it comes to producing written material.) Stable Diffusion, a generative A.I. model developed by Stability AI, was recently released to the public as open-source and free to use.
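To give a sense of how low the barrier now is, this is roughly what running Stable Diffusion looks like with Hugging Face’s open-source diffusers library. The model identifier and settings below are my assumptions about a typical setup (and a capable GPU), not an official recipe from Stability AI.

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the publicly released weights and run them locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Anyone with a prompt can now generate an original, synthetic image.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```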


Founder Emad Mostaque justified the release by claiming that it is safer for A.I. to be democratized than dominated by a small number of corporate entities. He believes that by making A.I. accessible to the general public and ensuring transparency around the technology, communities, not corporations, will be able to decide how A.I. is regulated. In other words, Mostaque wants conversations about how A.I. should be used and regulated to take place in the public sphere, not behind closed doors. He views tools as extensions of their creators’ intentions and believes that too much regulation will curb free expression and stunt creativity. He also sees the way large social media companies handled their A.I. content algorithms over the last decade as a cautionary tale about the dangers of too much centralization and opacity, and he hopes to forestall such a trend in A.I. by democratizing the technology from the start.


What I believe Mostaque misses is that he has in effect made the choice for us. Rather than first allowing a societal discourse about the extent to which advanced A.I. tools should be available for anyone to use, he has released Stable Diffusion to the public. At the risk of making a slippery slope argument, I would contend that it isn’t at all unreasonable to think that his actions will only encourage more creators to also release their tools to the public, and almost certainly, many of these tools will have far more potential for societal harm. While I agree with Mostaque that I certainly don’t want a small cabal of corporate actors to unilaterally dictate the incorporation of advanced A.I. into society, it’s irresponsible to leave such an important question up to the judgment of just anybody. When you leave the question up to “everyone”, you really leave it up to “every-one”, no matter who that “one” is or what their intentions are.


Governments are supposed to represent the interests of the people, and of society at large, not just the selfish interests of a few. Even as I write this sentence, I chuckle to myself at such a naïve and idealistic vision of government, but at the same time, the idea that simply anyone can use a tool as powerful as a highly advanced deep learning algorithm is terrifying. We do have a say in our government; we don’t individually have a say in a corporation's decisions or one individual's actions. Is it truly a good idea to rely on the hope that everyone will have enough sense of societal responsibility not to abuse A.I.? Government regulators are notoriously slow, and against rapidly accelerating developments in deep learning tools, it’s unlikely that they will act fast enough, if at all, to curb malicious uses of A.I. It will take a determined effort to rein in the coming excesses of a new digital age, but the task is still possible. What I fear most is that such an effort will only be mustered after we have already suffered some catastrophic consequence of delay.


Another factor to consider is natural language processing (NLP). Even if companies don’t release specific, pre-programmed tools, the dissemination of the underlying technology could prove dangerous. Natural language processing is, broadly, the set of techniques that let computers work with natural language (the vernacular we use to speak and write in our everyday lives), and increasingly it means tools that turn a plain-English request into working code. Instead of requiring knowledge of the specific syntax and nuances of programming languages, such tools let someone relatively untrained in computer science build algorithms and create their own tools. The technology is still relatively immature, but as computers have become ever more essential to conducting business, the demand for programmers has driven investment in it.
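To show what "programming in plain English" can look like, here is a sketch using OpenAI’s Python client as it existed around the time of the Codex code-generation models; the API key is a placeholder and the exact model name and settings are assumptions of mine, offered only as an illustration of the idea.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# A plain-English description of the program we want.
prompt = "# Python function that returns the average of a list of numbers\n"

response = openai.Completion.create(
    model="code-davinci-002",  # assumed code-generation model
    prompt=prompt,
    max_tokens=100,
    temperature=0,
)

# The model's reply is the code itself; no knowledge of Python syntax was required to ask for it.
print(response["choices"][0]["text"])
```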


What would the world look like if anyone not only had access to highly advanced deep learning algorithms but could create specialized algorithms for their own particular use with only rudimentary coding knowledge? I don’t know exactly how we should manage the risks and benefits of deep learning algorithms in our society. Don’t let the doom and gloom of the last few paragraphs make you forget all of the potential benefits of these new A.I. tools; A.I. will revolutionize science, and many of its future benefits are hard to even imagine today. But society needs to start having conversations about how to manage the powerful new capabilities entering our arsenals, or else we might not have the chance until it is already too late, and people like Mostaque have already made the decision for us.


The New York Times just came out with a new technology news podcast, Hard Fork, and I highly recommend giving it a listen. This episode, which was the inspiration for this article and features an interview with Mostaque, is especially pertinent.



