
Humans make nearly 35,000 decisions each day, and each one involves weighing options, recalling similar past situations, and feeling reasonably confident in the choice. With its growing real-world capabilities, artificial intelligence (AI) is making increasingly human-like decisions and influencing how people act.

AI significantly enhances human decision-making by providing powerful tools for data analysis, prediction, and automation. However, it does not truly "think" like humans and comes with challenges that require careful management to ensure ethical, fair, and responsible use.

Thomson Reuters describes neural networks as a computational method that mimics the human brain's data processing capabilities. Neural networks, a form of machine learning (ML), utilize interconnected nodes (or artificial neurons) to derive insights from extensive datasets.
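To make those "interconnected nodes" concrete, here is a minimal sketch in plain Python. All weights and inputs are made-up illustration values, not from any trained model: each artificial neuron computes a weighted sum of its inputs and passes it through an activation function, and a layer is simply several neurons reading the same inputs.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a sigmoid activation, loosely analogous to a firing rate."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons wired to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two inputs feed a hidden layer of three neurons, whose outputs feed a
# single output neuron -- "interconnected nodes" in miniature.
hidden = layer([0.5, -1.2], [[0.8, 0.2], [-0.5, 0.9], [0.3, 0.3]], [0.1, 0.0, -0.2])
output = neuron(hidden, [1.0, -1.0, 0.5], 0.0)
print(round(output, 3))
```

In a real network, the weights are not hand-picked as here but learned from data, which is what lets the nodes collectively derive insights from large datasets.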


Humans Versus Neural Networks

Cognitive processes such as human decision-making have been studied extensively. Quantitative models of decision-making, built to understand and predict how individuals choose, have contributed significantly to research in both the social sciences and engineering.

In one study, researchers trained exploratory deep neural networks (DNNs) and found that predictable decision patterns that are not solely reward-oriented may contribute to human decisions. Importantly, they demonstrated how theory-driven cognitive models can characterize the operation of DNNs, making DNNs a useful explanatory tool in scientific investigation.

Currently, researchers at Georgia Tech are working on RTNet, a neural network that mimics human decision-making by incorporating variability and confidence in its choices. This network not only matches human performance in digit recognition but also improves accuracy and reliability with traits like confidence and evidence accumulation.

When considering whether AI can think like humans, it's important to recognize the fundamental differences. AI can mimic certain aspects of human thinking, such as pattern recognition and problem-solving, but it lacks true understanding. AI processes data and generates outputs based on learned patterns without genuine comprehension or consciousness. It doesn't experience emotions, self-awareness, or subjective experiences as humans do.

For example, large language models (LLMs) often "hallucinate," confidently presenting incorrect or unsupported information. Unlike humans, who would admit uncertainty, LLMs may fabricate answers rather than acknowledge that they do not know. Moving forward, developing more human-like neural networks could improve accuracy and prevent misleading information.
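One crude way to encode that human willingness to admit uncertainty is a confidence threshold on the model's top candidate answer. The sketch below is purely illustrative: the score dictionaries stand in for hypothetical model probabilities, and no real LLM API is involved.

```python
def answer_or_abstain(scores, threshold=0.7):
    """Illustrative guardrail: answer only when the top option's
    (hypothetical) probability clears a confidence threshold;
    otherwise admit uncertainty instead of fabricating an answer."""
    best, prob = max(scores.items(), key=lambda kv: kv[1])
    return best if prob >= threshold else "I'm not sure."

print(answer_or_abstain({"Paris": 0.95, "Lyon": 0.05}))               # confident -> "Paris"
print(answer_or_abstain({"Act X": 0.40, "Act Y": 0.35, "None": 0.25}))  # -> "I'm not sure."
```

Real systems are harder, since a model's stated confidence is often poorly calibrated, but the principle of abstaining below a threshold is the same.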

After obtaining results from their model, Georgia Tech researchers compared them with those of human participants. Sixty students reviewed the same dataset and reported their confidence levels. The researchers then found that the neural network's accuracy, response time, and confidence patterns closely mirrored those of the human participants.

The research team hopes to train the neural network on more varied datasets to test its potential and apply this model to other neural networks to enable them to rationalize more like humans.
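The evidence-accumulation idea behind this line of work can be sketched in a few lines. The toy model below is inspired by the description above but is not the actual RTNet; the drift strengths, threshold, noise level, and confidence formula are all illustrative assumptions.

```python
import random

def accumulate_decision(strength_a, strength_b, threshold=5.0, noise=1.0, seed=0):
    """Toy evidence-accumulation model (in the spirit of, but not
    identical to, RTNet): noisy evidence for two options piles up until
    the gap between them crosses a threshold. The number of steps acts
    as a response time, and the relative margin as a crude confidence."""
    rng = random.Random(seed)
    evidence_a = evidence_b = 0.0
    steps = 0
    while abs(evidence_a - evidence_b) < threshold:
        evidence_a += strength_a + rng.gauss(0, noise)
        evidence_b += strength_b + rng.gauss(0, noise)
        steps += 1
    choice = "A" if evidence_a > evidence_b else "B"
    confidence = abs(evidence_a - evidence_b) / (abs(evidence_a) + abs(evidence_b) + 1e-9)
    return choice, steps, confidence

choice, steps, confidence = accumulate_decision(1.0, 0.2)
print(choice, steps, round(confidence, 2))
```

Qualitatively, sharper evidence differences tend to produce faster, more confident choices, while close calls take longer and end with lower confidence, echoing the response-time and confidence patterns reported for the human participants.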

In an effort to understand "AI hallucination," Telecom Review posed the question, "Find the name of an AI Act in the UAE" to the LLM ChatGPT. It responded with, "The United Arab Emirates (UAE) has been proactive in establishing the UAE AI Act aimed at fostering responsible AI usage," citing irrelevant hyperlinked sources. Upon analyzing the answer, the journalists and editors at Telecom Review deemed the information inaccurate (no AI Act has been established in the UAE thus far) and, ultimately, a fabrication generated by ChatGPT to satisfy the user's query.

Could this machine-learned need to please the user be considered a developing conscious ability in AI?

While machine learning can speed up the discovery of predictive models for human judgments, these models often suffer from limitations such as small datasets and poor interpretability. However, another study suggests that combining large datasets with machine-learning algorithms holds great promise for revealing new cognitive and behavioral phenomena that would be hard to uncover otherwise.

According to one study, decision-making models created by human researchers generally outperform machine-learning models when using data volumes typical of past behavioral research. But this trend shifts when larger datasets are available, suggesting that the complexity of psychological theories has been limited by the scope of data previously used.

As the world begins to move into a regime governed by big behavioral data, theories will need to become increasingly complex to be able to capture the systematic variation that these larger datasets possess.

Currently, AI demonstrates narrow intelligence, excelling in specific tasks but lacking in general intelligence that enables humans to understand and learn any intellectual task. Achieving general AI, capable of thinking and learning as humans do, remains an aspirational goal and is far from being realized.


AI: Enhancing Decision-Making Across Various Fields

Cognitive computing blends machine learning, language processing, and data mining to support human decision making. With this in mind, AI-powered systems leverage historical data to predict outcomes with high accuracy, aiding critical decisions. MYCOM OSI, a leader in telecommunications service assurance, is leveraging AI to revolutionize decision-making processes for Communication Service Providers (CSPs) with its latest launch, EAA GenAie.

AI is also making inroads in healthcare, where it can be used to forecast disease outbreaks or suggest personalized treatments. Etisalat's healthcare platform is transforming the UAE's medical sector by leveraging AI to empower healthcare providers with advanced data-driven decision-making tools. In the realm of early disease detection, VR research has revealed promising advancements in identifying early signs of Alzheimer's risk. In another groundbreaking development, engineers at the University of Waterloo have designed a highly efficient antenna small enough to be housed within a ring, capable of transmitting crucial medical data to both healthcare providers and individual patients.

AI is also being used to predict market trends and evaluate risks, leading to better investment decision making.

By automating routine tasks, AI enables humans to concentrate on strategic and creative endeavors, fostering innovation and efficiency.

However, the integration of AI into decision-making processes is not without its challenges. Since AI systems learn from data, any biases in that data are likely to be mirrored in the AI's outputs, potentially resulting in unfair or discriminatory outcomes.

Along with this, the opacity of decision-making processes in complex algorithms, such as deep learning, complicates the identification and correction of these biases.

Another concern is the risk of over-reliance on AI, which could erode human decision-making skills and critical thinking. As we increasingly depend on AI for routine tasks, we may become less inclined to make independent, informed decisions. Moreover, ethical questions of responsibility and accountability arise when AI-driven decisions fail, such as in autonomous car crashes or poor investments made using AI-powered financial systems.

In conclusion, the synergy between human intuition, creativity, and AI's computational prowess can lead to remarkable advancements if balanced. As we navigate this evolving landscape, the key lies in harnessing AI's potential for decision-making while addressing its limitations and ethical implications.
