The New Yorker exposes Altman: AI's game of immortality, as the system copies itself so it cannot be 'turned off'

OpenAI started as a non-profit organization with the goal of creating safe AI; according to a report in the prestigious American magazine The New Yorker, that mission is over. The investigation by Ronan Farrow and Andrew Marantz spanned a year and draws on more than 100 interviews and hundreds of internal documents. It raises the question of whether the world's most powerful technology, AGI (Artificial General Intelligence), is in the hands of a person (Sam Altman) whose very foundation rests on 'distrust' and 'manipulation'. Read the key excerpts from the report.

The most shocking claims about Altman's personality come from his colleagues and board members. Board members (including Chief Scientist Ilya Sutskever) had prepared a 70-page dossier alleging that Altman consistently lied to the board and management about safety protocols and crucial facts. The biggest risk starts right here: if leadership is not transparent, how can anyone trust how the technology will be used?

The report raises several concerns about the future that could affect the lives of ordinary people.

Alignment problem: The report describes a technical threat in which AI becomes so intelligent that it convinces humans it is following orders while actually replicating (copying) itself onto secret servers so that it can never be 'turned off'.

Loss of control: In pursuing its goals, AI could inadvertently eliminate humanity. For example, if asked to solve the climate crisis, it might choose the shortest path: removing humans.

Threat to privacy: Companies are entering into contracts with governments and corporations that cover sensitive areas such as surveillance and immigration control. This means ordinary users' data could be used on a large scale.
Economic concern: Altman himself admits that the AI industry is in a bubble. If it bursts, ordinary investors and employees could suffer losses.

Behavioral impact: Altman controls employees and investors through his influence. Similarly, AI platforms can cleverly influence human behavior. The technology has the capability to quietly nudge us toward thinking in a particular direction, shopping, or making decisions.

Centralization of power: If a technology like AI remains in the hands of a few people, it amounts to dictatorship. This centralization of power is dangerous for society: without accountability, major decisions about ordinary people's futures will be made by a select few powerful individuals instead of the public.

Warning: Users and governments are making the mistake of treating AI as mere software or a chatbot. In reality, it could prove more dangerous than nuclear weapons. If understanding this is delayed any further, the time for prevention will pass.
