AI Noise and Manipulation Feared as a Potential Threat

Yesterday’s AI-generated graphic, which claimed an explosion had occurred near the Pentagon in Washington, D.C., sent equity indices into a brief selloff. However, the image was soon proven to be false news, as people in Washington confirmed there had been no explosion.


AI has the capacity to cause sudden storms if people try to trigger manipulation in the financial world and elsewhere using ‘false’ data and graphics. ‘Bad actors’ within AI will likely come to be compared to the ‘ransomware’ crowd in the world of high tech, and people and institutions will have to react quickly to distinguish fact from fiction. The ability of AI to manipulate the markets yesterday is only the beginning, and we need to be prepared for more stories like the Washington, D.C. fake.

AI Mania is Building in the Media, and People are Concerned about Wrong ‘Facts’


AI machine learning is coded by people, and some of them are prone to bias, which raises the specter of bad input being used in systems that serve the public and clients in an ill-fated manner. Putting all of our trust into an AI system is wrong-headed, just as we do not put all of our trust into Wikipedia; we remain aware that facts should be checked against a variety of sources.


Yesterday’s deepfake AI graphic highlights the need for financial markets to detect attempts to manipulate the narrative in a timely fashion. Certainly some traders got hurt during yesterday’s reaction to the false report of an explosion in Washington. The dishonest graphic made instant news globally, and social media gadflies raced to report ‘the explosion’ and then had to quickly admit they had been tricked. Data bias in AI is just as problematic and perhaps more dangerous, because what is presented as fact will always have to be given critical consideration by its users.


Bias could produce arrogant AI ‘tools’ full of hubris as they assert ‘truth’, creating self-perpetuating machines full of wrong details. This could happen as AI searches the internet for information, relies on data that is poor, and ingests statistics from its own system that were posted elsewhere, which could propagate falsehoods. The prospect of AI feeding on its own potentially bad output, previously distributed into other information networks, could in theory lead to stubborn ecosystems which insist they are correct when they are actually not accurate.


Middle-of-the-Road Results will Sometimes Make Users Choose a Direction


Public AI systems frequently tend to deliver ‘middle of the road’ aggregate results so they do not offend, leaving users with mixed insights and without a firm stance. Perhaps if users understand this circumstance, it can be perceived as a good outcome, because the person will have to do their own critical thinking while choosing a direction. There is a danger that politically correct thinking coded into AI could lead to more vanilla and less flavor. The fear of offending people with facts may become a danger for AI, and coders will have to decide how to program searches as they produce objective and subjective outcomes.


Learning has changed as the internet has grown more robust with ‘facts’. Students often do not feel it is necessary to master particulars by reading a range of books. Instead they tend to rely on their mobile phones and laptops for their knowledge, avoiding the in-depth study on their own which would offer more insight and build critical thinking. This can and does lead to the use of ‘expertise’ produced by the internet which is incorrect.


Let’s also consider that while public use of artificial intelligence has won a large amount of publicity in the past year, machine learning capabilities have in fact been in use for a long time. The media has done a fairly good job of stirring the masses into a furor, and solid marketing has made AI the center of conversation over the past handful of months.


AI is far from perfect because it is being built by flawed humans. In 1952, Arthur Samuel at IBM built a program allowing a computer to play checkers and learn how to improve its outcomes through ‘play’. In 1997, an IBM system called Deep Blue beat world chess champion Garry Kasparov in a six-game match. The 45-year gap should be noted as we contemplate how AI will develop in the future.
