
India Insider: Concern IT Empire is at Risk in Age of AI

When China’s DeepSeek announced its Generative AI program as a rival to the U.S.-based ChatGPT, the world paid close attention. Nasdaq bellwether Nvidia, the world’s most valuable company, took a hit because the DeepSeek product was built on less expensive chip processors than ChatGPT’s infrastructure, which relies on Nvidia’s GPU technology.

In North America and Europe, DeepSeek’s rollout was met with much surprise and intrigue. India’s IT (Information Technology) sector, the true ‘poster child’ of the country’s post-liberalization era, has been facing its own challenges and was also caught off guard. The sector employs some 5.3 million people and helps maintain India’s current account balance by earning crucial foreign exchange. The top four IT companies have a combined market cap of $300 billion USD, larger than the roughly $238 billion valuation of Reliance Industries, owned by India’s richest man, Mukesh Ambani.

Nifty IT Index One Year Chart as of 29th July 2025

India’s IT Business Model and Artificial Intelligence

Indian IT companies operate on a model of software servicing for offshore clients, typically via medium- to long-term contracts. Their business operations are embedded across the globe thanks to affordable pricing and the quality of service provided by Indian software engineers. Now this model is being threatened by the rise of Generative AI, and taking that threat lightly would be a serious mistake for India.

Shares of the major IT companies - TCS, Infosys, Wipro, and HCL - have delivered lackluster returns since their post-pandemic rally. High valuations amid deal pessimism have been a concern since Covid; now those worries are amplified by AI and the disruption it brings to their business models. Software exporters remain the worst performers: the Nifty IT index is down 18% year-to-date, significantly underperforming the broader index.

The recent release of Q1 fiscal year 2026 numbers from these four IT companies has been met with skepticism regarding their forecasted outlooks. Analysts noted that Indian IT firms are grappling with margin pressures amid persistent macroeconomic headwinds and rising threats from AI-driven productivity improvements. In response, companies have started protecting their margins with layoffs; TCS (Tata Consultancy Services) announced this past weekend that it would shed around 2% of its workforce, which could affect more than 12,000 jobs.

Time For India’s IT Sector to Become Proactive

The pricing models that IT companies charge customers are shifting from traditional fixed annual licensing toward short-term flexible contracts such as ‘pay as you go’. Despite changing CEOs at several of these companies over the last few years, animal spirits are failing thus far to produce AI products that can enhance the bottom line. Instead, companies prefer share buybacks and stellar dividends to appease shareholders rather than investing in R&D, especially when their core model is under threat.

Hang Seng Index One Year Chart as of 29th July 2025

The euphoria surrounding India’s $5.4 trillion equity market is cooling in 2025 amid concerns over slowing earnings growth, elevated valuations, and tariff-related uncertainty. At the same time, sentiment towards Hong Kong’s listed Chinese shares is improving, with global fund managers rapidly reallocating capital to that market. The Hang Seng Index has delivered an impressive 27% return year-to-date. Meanwhile, India’s stock market still lacks depth for investors seeking meaningful exposure to the booming Artificial Intelligence theme.

Indian IT companies excel at scaling and delivering AI solutions for global clients, but they do not own the core models, platforms, or consumer data needed to become true AI disruptors like China’s tech giants. The industry contributes approximately 7.5% to India’s GDP and remains the primary employment avenue for engineering graduates. It’s time for India’s IT sector to proactively address the growing AI threat posed by global competitors.


AI Noise and Manipulation Feared as a Potential Threat

Yesterday’s AI-generated graphic claiming an explosion had occurred near the Pentagon in Washington D.C. sent equity indices into a brief selloff. The graphic was soon proven to be false news, however, as people in Washington confirmed there had been no explosion.

AI has the capacity to cause surprise storms if some people use ‘false’ data and graphics to manipulate the financial world and elsewhere. ‘Bad actors’ within AI will likely be compared to the ‘ransomware’ operators of the high-tech world, and people and institutions will have to react quickly to distinguish fact from fiction. Yesterday’s AI-driven market manipulation is only the beginning, and we need to be prepared for more stories like the Washington D.C. fake.

AI Mania is Building in the Media and People are Concerned about Wrong ‘Facts’

AI machine learning is coded by people, and some of them are prone to bias, which raises the specter of bad input being used in systems that serve the public and clients in an ill-fated manner. Putting all of our trust into an AI system is wrong-headed, just as we do not put all of our trust into Wikipedia; facts should be checked against a variety of sources.

Yesterday’s deepfake AI graphic highlights the need for financial markets to discern attempts to manipulate the narrative in a timely fashion. Certainly some traders got hurt during yesterday’s reaction to the false report of an explosion in Washington. The dishonest graphic made instant news globally, and social media gadflies raced to report ‘the explosion’, then had to quickly admit they had been tricked. Data bias in AI is just as problematic and perhaps more dangerous, because what is presented as fact will always have to be given critical consideration by its users.

There is also the prospect of bias producing arrogant AI ‘tools’ full of hubris as they assert ‘truth’, creating self-perpetuating machines full of wrong details. This could happen as AI searches the internet for information, relies on data that is poor, and consumes statistics generated by its own system and posted elsewhere, manifesting falsehoods. An AI drawing on its own potentially bad output, distributed across other information networks, could in theory lead to stubborn ecosystems that insist they are correct when they are not.

Middle of the Road Results will make Users Choose Direction Sometimes

Public AI systems tend to deliver ‘middle of the road’ aggregate results so they do not offend, leaving users with mixed insights and without a firm stance. If users understand this circumstance, it can perhaps be perceived as a good outcome, because the person will have to do their own critical thinking while choosing a direction. There is a danger that politically correct thinking coded into AI could lead to more vanilla and less flavor. The fear of offending people with facts may become a danger for AI, and coders will have to decide how to program searches as they produce objective and subjective outcomes.

Learning has changed as the internet has grown more robust with ‘facts’. Students often do not feel it is necessary to master particulars by reading a range of books. Instead they tend to rely on their mobile phones and laptops for their knowledge, avoiding the in-depth independent study that would offer more insight and develop critical thinking. This can and does lead to reliance on ‘expertise’ produced by the internet that is incorrect.

Let’s also consider that public use of artificial intelligence has won a large amount of publicity in the past year, but machine learning capabilities have in fact been in use for a long time. The media has done a fairly good job of stirring the masses into a furor, and solid marketing has made AI the center of conversation over the past handful of months.

AI is far from perfect because it is being built by flawed humans. In 1952, Arthur Samuel at IBM built a program that allowed a computer to play checkers and learn to improve its outcomes through ‘play’. In 1997, IBM’s Deep Blue system beat world chess champion Garry Kasparov in a six-game match. That 45-year gap should be noted as we contemplate how AI will develop in the future.