
Generative AI promises to get things done in a smarter way: it uses deep-learning models to create something new, whether an image, text or video. The popular ChatGPT, for example, needs only to be prompted with words or phrases, and within seconds it delivers human-like generated text.
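To make that prompt-in, text-out workflow concrete, here is a minimal sketch using the openai Python client; the model name, prompt and client setup are illustrative assumptions, not details from this article.

```python
# A minimal sketch of prompting a generative text model.
# Assumes the openai package (>= 1.0) and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[{"role": "user",
               "content": "Summarize the water cycle in two sentences."}],
)

print(response.choices[0].message.content)  # the human-like generated text
```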

Yet in practice, companies will still need humans to refine generative AI’s outputs, because the technology remains prone to inaccurate answers. This is especially true when it is asked to predict what could happen, or reconstruct what has already happened, from error-prone or insufficient data.

ChatGPT and its alternatives, including Google Bard, Bing AI, Jasper Chat and Microsoft’s DialoGPT, can either over- or under-generate content. They can produce both truths and fictions, endorsing ethical and unethical decisions alike. Because these systems are technology-driven, with no morals shaping their decision-making, they are indifferent to the consequences of their outputs.

However useful these programs may be in various domains, experts argue that, from the perspective of linguistics and the philosophy of knowledge, they differ profoundly from how humans reason and use language. These differences place significant limitations on what the programs can do, burdening them with intrinsic defects.

Unlike these models, with their flawed conception of language and knowledge, true intelligence is capable of moral thinking: balancing the mind’s creativity against a set of ethical principles that determine what should and should not be. To be useful in the long run, ChatGPT must be empowered to generate more novel-looking output while steering well clear of morally objectionable content, and the developers of this AI model and other machine-learning marvels will likely continue to struggle to achieve that balance.

The human mind is not like ChatGPT or any other statistical engine designed for pattern matching. It is a surprisingly efficient and elegant system that operates on small amounts of information, seeking to create explanations from data points rather than merely infer correlations among them.

True intelligence works with data, information and knowledge representations for cognition and reasoning, understanding and learning, problem-solving, prediction and decision-making: a full interaction with the environment. AI can match these capabilities at present, but only within sizable limits.

A word of caution to anyone using generative AI: since the technology is still in its early stages of maturity, it remains unreliable on many fronts, at times producing offensive responses and, in the classroom setting, plagiarized content.

Daeyeol Lee, a distinguished neuroscience professor, defines true intelligence as a process of self-replication that promotes, rather than interferes with, the replication of the genes responsible for its creation, including necessary hardware like the brain. “Without this constraint, there is no objective criteria for determining whether a particular solution is intelligent,” Lee says. Machines will serve as surrogates for human intelligence, but unfortunately, this still leaves open the possibility of prominent “mishandling” by the very people controlling the AI.

Implicit bias, poor data and people’s demanding expectations mean that AI will never be perfect. Indeed, despite being digital and mathematical, AI models can still go wrong by “underfitting” (a model too simplistic to capture the underlying pattern) or “overfitting” (a model so complex that it memorizes noise in its training data), as the sketch below illustrates.
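The sketch below, a polynomial-regression toy example assuming numpy and scikit-learn, shows both failure modes: a degree-1 model underfits (high error everywhere), while a degree-15 model overfits (near-zero training error, large test error). All data and model choices are illustrative.

```python
# Under- vs overfitting on a noisy sine wave (numpy + scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)  # noisy samples

X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()  # the noise-free truth

for degree in (1, 4, 15):  # too simple, about right, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    print(f"degree {degree:2d}: "
          f"train MSE {mean_squared_error(y, model.predict(X)):.3f}, "
          f"test MSE {mean_squared_error(y_test, model.predict(X_test)):.3f}")
```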

If the initial data used to program and train machine-learning models such as ChatGPT is limited or historically biased, the result can be a kind of “digital discrimination.” A closely related failure is a mismatch between the data the AI was trained and tested on and the data it encounters in real life, a problem known as “data shift.” A small illustration follows.
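Here is a minimal sketch of data shift, assuming numpy and scikit-learn: a classifier is trained on one feature distribution, then evaluated both on data drawn from that same distribution and on data whose distribution has drifted. The synthetic data and the choice of classifier are illustrative assumptions.

```python
# Data shift: a model trained on one distribution degrades when the
# deployment distribution drifts away from it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole feature distribution."""
    X = np.vstack([rng.normal(0.0 + shift, 1.0, (n, 2)),   # class 0
                   rng.normal(2.0 + shift, 1.0, (n, 2))])  # class 1
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)             # training-time data
X_same, y_same = make_data(500)               # deployment matches training
X_drift, y_drift = make_data(500, shift=1.5)  # deployment has drifted

clf = LogisticRegression().fit(X_train, y_train)
print("matched accuracy:", accuracy_score(y_same, clf.predict(X_same)))
print("drifted accuracy:", accuracy_score(y_drift, clf.predict(X_drift)))
```

The first score stays high while the second drops sharply, even though nothing about the model changed; only the world it was deployed into did.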

An even more challenging problem is “underspecification”: many different models can earn the same score on standard predictive-performance tests yet behave very differently once deployed, because the training setup does not pin down which of them is right. Guarding against it requires specifying and testing a model’s requirements well beyond standard predictive performance, as the sketch after this paragraph suggests.
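A minimal sketch of the idea, under illustrative assumptions (numpy and scikit-learn, a synthetic shortcut feature): several networks that differ only in their random seed score essentially identically on validation data, yet can diverge on a stress test where a spurious correlation from training is broken.

```python
# Underspecification: equally good validation scores, potentially
# different behavior once a training-time shortcut disappears.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shortcut=True):
    x0 = rng.normal(size=n)
    # In training, feature 1 is a near-copy of feature 0 (a shortcut);
    # in the stress set, that correlation is broken.
    x1 = x0 + rng.normal(0, 0.1, n) if shortcut else rng.normal(size=n)
    y = (x0 > 0).astype(int)  # the true rule uses only feature 0
    return np.column_stack([x0, x1]), y

X_train, y_train = make_data(1000)
X_val, y_val = make_data(500)                      # looks like training data
X_stress, y_stress = make_data(500, shortcut=False)

for seed in (0, 1, 2):  # identical pipelines, different random seeds
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed {seed}: "
          f"validation {accuracy_score(y_val, model.predict(X_val)):.3f}, "
          f"stress {accuracy_score(y_stress, model.predict(X_stress)):.3f}")
```

Validation alone cannot distinguish the seeds; the stress test is what can reveal which models leaned on the shortcut.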

“We need to get better at specifying exactly what our requirements are for our models,” explained Alex D’Amour, who led a Google research study on the problem. “Because often what ends up happening is that we discover these requirements only after the model has failed out in the world.”

Developing a fix is vital if AI is to have as much impact outside the lab as it does inside it; when AI underperforms in the real world, people become less willing to use it.

Technological Singularity: A Potentially Dangerous Future

Taking all of this into consideration, any man-made intelligence, developed as a combination of logic-based general AI and statistics-based narrow AI, raises the debate over whether such technology can surpass human intelligence.

This “technological singularity” is defined as a hypothetical future point at which technological progress becomes so rapid and exponential that machines are able to design and build even more advanced machines, and so on, eventually becoming superior to humans.

This could lead to a runaway effect of ever-increasing intelligence, creating a scenario in which humans can no longer understand or control the technology they have created. Some proponents of the singularity argue that it is inevitable, while others believe it can be prevented through stringent regulation of AI development.

In the expanding world of AI, the concerning notion of a singularity looms large: it would be enormously difficult to predict when it begins and nearly impossible to know what lies beyond such a daunting technological horizon. AI researchers are studying the possibility by monitoring and measuring AI’s progress toward the unique skills and abilities of the humans it mimics.
