One of the great challenges we face is the definition of artificial intelligence.
Consider how the industry itself defines the term:

Google, September 12, 2023: "Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more."

Microsoft, January 26, 2023: "An artificially intelligent computer system makes predictions or takes actions based on patterns in existing data and can then learn from its errors to increase its accuracy."

IBM, July 6, 2023: "Artificial intelligence (AI) is technology that enables computers and digital devices to learn, read, write, create and analyze."

NVIDIA, March 21, 2023: "Artificial intelligence is the ability of a computer program or machine to think and learn without encoded demands."

HPE, March 14, 2022: "Artificial intelligence (AI) broadly refers to any human-like behavior displayed by a machine or system."

ChatGPT, April 14, 2024: "Artificial intelligence (AI) is a technology that enables computers and machines to mimic human-like behaviors such as seeing, understanding language, creating content, and making decisions."

The agreeable take-aways: every definition centers on computers or machines, most invoke learning, and nearly all measure success by how closely the behavior mimics human abilities.
Let's take a step back. What precisely is 'intelligence'? According to Britannica, "Human intelligence is, generally speaking, the mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to control an environment." With this simple caveat: "However, the question of what, exactly, defines human intelligence is contested, particularly among researchers of artificial intelligence, though there is broader agreement that intelligence consists of multiple processes, rather than being a single ability."

Both definitions are open-ended, not solid. Each requires a 'leap of faith'. Here's the deal: how can we attempt to regulate something we are creating to imitate something we know so little about? How can we expect to manage the outcomes of a process we barely comprehend, or whose very existence we still debate?

I don't know which is more dangerous: letting artificial intelligence roam, grow, and develop 'organically', or adhering to a specific group of humans' desire to create AI in their own image. We've done this since the beginning of time in order to make the unimaginable understood, at least temporarily. So we move forward in the darkness of infinity, small lantern in hand, with our narrow field of view. One thing is certain: we will bump into something. I think we'll collide with a mirror image of ourselves. Are we The Modern Prometheus?
Greg Walters | September 2024