While AI has been around since the 1950s, it only recently became a household name with the introduction of OpenAI's ChatGPT less than two years ago. As an investor in both OpenAI and xAI, I constantly wonder about the ethical ramifications of this new technology, how intelligent it really is and whether it is the best way forward for humanity. As an experiment, I asked ChatGPT for its opinion on artificial intelligence (AI) systems in today's society, and this is what it had to say:
“As an AI language model, I have no personal opinions or beliefs. However, I can provide an objective analysis of AI systems in today's society. AI has brought many benefits to society, such as improving efficiency, accuracy and decision-making processes, and it has also contributed to advances in medical research, transportation and other industries. However, the deployment of AI systems in today's society also presents potential risks, such as job displacement, privacy concerns and the possibility of biased algorithms. It is therefore essential to ensure that AI is developed ethically and responsibly to mitigate these risks and to ensure that the technology benefits society as a whole. With careful consideration and ethical responsibility, AI systems have great potential to positively impact society.”
While this response shows that the AI is still robotic and objective in its methods of “thinking,” it is also, in fact, aware of the ethical concerns associated with its use. At this critical point in history, where we sit on the precipice of mass adoption of AI, we still can't seem to decide whether it will be our savior or our villain. Ultimately, it comes down to the applications we build and the responsible management of AI systems.
As an investor, I'm in the unique position of being able to help propel these new AI companies forward. In the last two years, we have seen a “golden age” of AI startups. Many of them have raised impressive rounds; many have failed, and some have already been acquired. The two I invested in, OpenAI and xAI, are now among the most prominent: the first for its ability to meet the challenge of mass global adoption, and the second for its commitment to building a more ethical foundation for the AI ecosystem.
When it comes to AI applications, it's safe to say that humans have always tried to create tools to make our lives easier; this is not new. Historically, “mechanical muscles” and outsourcing displaced physical labor. Now, we are at the point where “machine minds” will unleash a cascade of applications across industries. Automation has evolved beyond the purely physical.
A major development pushing the frontier of AI adoption is Microsoft's continued investment in OpenAI. This partnership will likely lead to everything from optimized Excel workflows to AI-generated PowerPoint presentations and even more email management support. White-collar labor is now also ripe for disruption.
In finance, AI can be used in various ways. AI algorithms can identify patterns of behavior, such as unusual transactions or signs of identity theft, allowing financial institutions to quickly detect and prevent fraudulent activity. AI can also analyze market trends and make predictions about future movements, helping institutions make more accurate decisions.
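To make that idea concrete, here is a minimal, hypothetical sketch of pattern-based transaction screening using an unsupervised anomaly detector. The feature set, thresholds and library choice (scikit-learn's IsolationForest) are illustrative assumptions on synthetic data, not any particular institution's fraud model.

```python
# A minimal sketch of flagging unusual transactions with an Isolation Forest.
# Features and numbers below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount, hour_of_day, distance_from_home_km]
normal = np.column_stack([
    rng.normal(60, 20, 1000),    # typical purchase amounts
    rng.normal(14, 4, 1000),     # mostly daytime activity
    rng.normal(5, 3, 1000),      # purchases close to home
])
suspicious = np.column_stack([
    rng.normal(2500, 500, 10),   # unusually large amounts
    rng.normal(3, 1, 10),        # middle-of-the-night activity
    rng.normal(800, 100, 10),    # far from the cardholder's usual area
])
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised model that scores how "isolated" each transaction is.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

In practice, a real system would feed flagged transactions to human analysts rather than blocking them automatically, which is exactly the kind of human oversight discussed below.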
While this is just one example of industry transformation, there are some areas of work that will be less affected by AI. These are industries that operate on information asymmetry, such as early-stage venture capital. For AI to work effectively, the model must have access to data. Industries whose data is private, fragmented and complex cannot train models and draw inferences in the same way as one built on publicly available information. The power of AI depends on the value of the data and its underlying model, which in turn depends on the quality of the rules defined by the humans designing the algorithms.
In terms of management, we must maintain an element of skepticism and criticism as adoption of AI grows. These tools will continue to evolve, but they should not be treated as an omniscient source of truth. Again, this is critical because these systems only know what is in their underlying data. As people, investors and business-minded individuals, we must acknowledge the power of these systems while also recognizing that they must be constantly maintained. Because AI relies on ever-evolving algorithms that learn from their own outputs and experience, we must continue to fine-tune the rules and data sources that support these AI systems.
Diversity of thought and perspectives is critical for those who have the power to develop these systems. A system created by people with inherent biases will also be biased. We must keep human values at the center of how these systems are controlled. In today's society, where truth seems to be subjective, belief in these systems can be both extremely beneficial and extremely harmful. We must refrain from programming systems to tell people what they want to hear. It is important that people understand and respect the limitations of AI.
At the same time, just because humanity is capable of doing something doesn't always mean it should. We can replace most jobs with AI, but should we? Where do we draw the line? How can we ensure honesty and integrity in future systems and platforms? This is part of what xAI and OpenAI are committed to addressing and why I have trusted and supported them from the start.
Artificial intelligence will undoubtedly transform all of our lives, but this transformation will only be positive if we continue to be critical stewards of truth and information. As an investor, I encourage everyone to maintain a healthy dose of skepticism when investing in AI-powered solutions. Look at the human beings behind those systems, because their beliefs are what teach and drive the solution.