Artificial Intelligence Has Potential But Still Has A Long Way To Go
From Alexa and Siri, to the bots that have infiltrated social networks like Facebook and Twitter, to surveillance systems, Amazon robots, and face swap apps, artificial intelligence (AI) applications have become incredibly mainstream over the past decade or so.
Spurred on by the meteoric rise in internet usage in the early 2000s, artificial intelligence applications look set to become the next wave. Before going deeper into artificial intelligence, including where it falls short, it is important to first understand exactly what it is.
AI is essentially a collection of concepts that allow computer systems to mimic how the human brain works. The most popular of these concepts in today's AI applications is the neural network, a mathematical system for analyzing data and identifying patterns within it.
When fed an enormous amount of data, a neural network can pinpoint patterns in that data, and applications can then be built around that learned knowledge of the patterns. For example, if you feed a huge number of puppy pictures to a neural network, it will eventually learn what a puppy looks like, and an application that identifies puppies without human assistance can be built on top of that knowledge. With this layman's idea of how neural networks work, one can picture how they power things like Alexa and Siri. And herein lies the first issue with AI applications.
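To make the puppy example a little more concrete, here is a minimal sketch of that "feed pictures, learn patterns" workflow using the Keras API in TensorFlow. The folder layout (data/puppy and data/not_puppy), the image size, and the tiny network are assumptions chosen purely for illustration, not a description of how any real product is built.

```python
# A minimal sketch of training a neural network to recognize puppies.
# Assumes a hypothetical folder of labeled images: data/puppy and data/not_puppy.
import tensorflow as tf

# Load the labeled images from disk.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)

# A small convolutional network: layers of pattern detectors followed by a
# classifier that outputs one of two classes ("puppy" or "not puppy").
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # two classes: puppy / not puppy
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# "Feeding" the data: the network adjusts its internal weights to capture
# whatever statistical patterns separate the two folders of images.
model.fit(train_ds, epochs=5)
```

Nothing in the network "knows" what a puppy is; it only encodes whatever patterns happen to separate the images it was given, which is exactly why the quality of that data matters so much.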
To have Alexa work as seamlessly as it does nowadays, huge amounts of data had to be fed to the underlying neural network to allow it to answer and respond to requests the way it does. For the chatbots that offer support on websites to be able to assist you as they do, huge amounts of data have been fed to their neural networks so they can mimic a help center worker. For Facebook to offer a seamless translation of a post written in French into English, huge amounts of data have had to be fed to the underlying neural networks.
How this data is collected by big tech companies has been an ongoing issue for close to half a decade now. Some of their data-gathering methods have been described as unethical, if not downright illegal, and with AI applications growing in popularity and demand for them increasing daily, tech giants have an even bigger motive to continue their data-leeching ways.
Another problem brought about by AI applications' reliance on data is that, especially for internet applications, this data was created by people, some of whom hold bigoted and biased views. For example, a few years back, a Google search for "professional hairstyles" yielded results showing white people's natural hairstyles, while a search for "unprofessional hairstyles" showed mostly black women's natural hair. Of course, this was not a case of Google engineers deciding that white hair is more professional than black hair. It was rather algorithmic bias caused by the underlying neural network being fed data from bigoted and racist people who hold that view.
Sometimes, though, algorithmic bias can be a direct result of bias on the part of the engineers and developers themselves. Facial recognition software is an example of this: despite advances in the technology, it still correctly recognizes black faces only about half the time. This can be caused, among other factors, by engineers having used far more white faces than black faces to "teach" the software. The consequence is that, when in use, the software recognizes white faces more reliably than black faces, because that is what it has been taught to do.
These problems tie into a very important concept in computer science called "garbage in, garbage out" (GIGO). Simply put, the quality of a computer program's output depends on the quality of its input. Applied to our examples of Google's search algorithms returning bigoted results and facial recognition software failing to recognize black faces, it means that because the underlying neural networks were fed bigoted, biased, and racist data, the results output by the programs are also going to be bigoted, biased, and racist.
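A toy experiment makes the point about skewed training data. The sketch below uses scikit-learn and entirely synthetic data (the 95/5 split, the "group A" and "group B" labels, and the feature setup are all assumptions for illustration, not real facial-recognition code): one classifier is trained on data dominated by group A and then evaluated separately on each group, and the underrepresented group ends up noticeably worse off.

```python
# Toy illustration of "garbage in, garbage out" via an imbalanced training set.
# Synthetic data: the label is carried by feature 0 for group A and by
# feature 1 for group B, so a model trained almost entirely on A learns
# little that helps it on B.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    """Make n samples whose label is carried by one feature; the other is noise."""
    labels = rng.integers(0, 2, n)
    features = rng.normal(size=(n, 2))
    # Shift the informative feature by +/-2 depending on the label.
    features[:, informative_feature] += np.where(labels == 1, 2.0, -2.0)
    return features, labels

# Training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, informative_feature=0)
Xb, yb = make_group(50, informative_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on fresh, equally sized samples from each group.
Xa_test, ya_test = make_group(2000, informative_feature=0)
Xb_test, yb_test = make_group(2000, informative_feature=1)
print("accuracy on group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy on group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The gap is not because the model "dislikes" group B; it simply never saw enough of group B's patterns to learn them, which is the same mechanism behind facial recognition systems trained on too few black faces.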
Of course, if more black people had internet access and were using it to contribute accurate and less stereotypical data to these neural networks, there would be fewer cases of these unfortunate search results and facial recognition mishaps. This difference in internet access, and the data divide that follows from it, is what continues to allow biased narratives to be present in and exhibited by the algorithms running AI applications to this day.
As another example, a Google search for "African children" and one for "European children" output very different and clearly stereotypical results, because more people are contributing data that presents African children negatively than data that depicts them in a positive light. Unfortunate Google search results are just the tip of the iceberg of the shortcomings of AI applications on the internet. Deepfakes and misinformation bots are another way AI contributes to false narratives being propagated on the internet.
There is no denying the good that has been brought about by AI applications. From Alexa and Siri to healthcare, finance, transportation, and education applications, AI has made life more convenient and safer. Despite these welcome contributions, its downsides should not be swept under the rug, as seems to be the case nowadays.
Artificial intelligence is both an old and a rudimentary concept. It is old in that neural networks have been a computer science concept since the 1950s, but they were not very successful back then because of limits on the amount of data available for them to learn from and on the computer processing power needed to analyze that data. The internet created vast amounts of data, which significantly improved them, and computer processing power has grown exponentially over the years.
It is also rudimentary in that, despite these huge leaps in data collection and computer processing power, artificial intelligence is still very far from being able to do what it was invented to do: mimic the workings of the human brain. Thinking that artificial intelligence is ready to replace humans just because Alexa can change the music for you and a Tesla can drive itself is a very naive view.
Before we can get to that point, many issues with AI, like its bias and its misuse, still have to be addressed. Open access to both data and the internet is probably the first and most important step toward improving AI. Open access to data, in that tech giants shouldn't have a monopoly on most of the data used to build AI applications, only to exploit it in the process; and open access to the internet, in that people in Africa, for example, should be able to paint a true narrative of themselves that shows up in Google searches, instead of the stereotypical one painted by bigoted people in countries with high internet penetration.
Inclusivity in tech spaces is another factor that can greatly improve AI. If marginalized people like women and people of color were more involved in building AI applications, instead of the current trend where it is mostly white males who end up intentionally or unintentionally perpetuating stereotypes, they could prevent bigoted and biased narratives from being built into these applications from the get-go, by identifying those stereotypes and helping to remove them.
Other issues with AI, like deepfakes and bots being used to spread misinformation, especially on social media networks, also have to be addressed, since tech giants are dragging their feet on them. Doing this will require the creation of regulatory frameworks that address these issues, something that is not going to happen overnight.
Getting to a point where we have perfect AI applications is going to take a long time, so we should not fool ourselves into thinking we have reached a utopia where machines eclipse humans in intelligence and can therefore replace them. We are still very far from machines being able to do anything humans can do, faster and better. Artificial intelligence, it seems, is still not very intelligent.