Machine learning: 9 challenges
27 May 21, 06:38
(This post was last modified: 27 May 21, 06:40 by harlan4096.)
The future will probably be awesome, but at present, artificial intelligence (AI) poses some questions, most of them about morality and ethics. How has machine learning already surprised us? Can you trick a machine, and if so, how hard is it? And will it all end with Skynet and the rise of the machines? Let's take a look.
Strong and weak artificial intelligence
First, we need to differentiate between two concepts: strong and weak AI. Strong AI is a hypothetical machine that is able to think and is aware of its own existence. It can not only solve the tasks it was designed for, but also learn new things.
Weak AI already exists. It is in applications made to solve specific problems, such as image recognition, car driving, playing Go, and so on. Weak AI is the thing we call “machine learning.”
We don’t know yet whether strong AI can be invented. According to expert surveys, we’ll have to wait another 45 years. That really means “someday.” For example, experts also say fusion power will be commercialized in 40 years — which is exactly what they said 50 years ago.
What could go wrong?
It’s still unclear when strong AI will be developed, but weak AI is already here, working hard in many areas. The number of those areas grows every year.
Machine learning lets us handle practical tasks without obvious programming; it learns from examples. For more details, see “How machine learning works, simplified.”
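To make "learning from examples" concrete, here is a minimal sketch (not from the article) of one of the simplest machine-learning methods, a 1-nearest-neighbor classifier: instead of hand-coded rules, it memorizes labeled examples and labels a new point by its closest known example.

```python
# "Learning from examples" in miniature: a 1-nearest-neighbor classifier.
# There are no explicit rules; the model is just the stored examples.

def euclidean(a, b):
    # Straight-line distance between two points of equal dimension.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(examples):
    # "Training" here is simply memorizing the (point, label) pairs.
    return list(examples)

def predict(model, point):
    # Label a new point with the label of the nearest stored example.
    return min(model, key=lambda ex: euclidean(ex[0], point))[1]

# Toy data: two clusters of 2-D points labeled "small" and "large".
training = [((1, 1), "small"), ((2, 1), "small"),
            ((8, 9), "large"), ((9, 8), "large")]
model = fit(training)
print(predict(model, (1.5, 1.2)))  # close to the "small" cluster
```

Real systems use far richer models, but the principle is the same: the behavior comes from the data, not from a programmer spelling out the rules.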
We teach machines to solve concrete problems, so the resulting mathematical model — what we call a “learning” algorithm — can’t suddenly develop a hankering to defeat (or save) humanity. In other words, we shouldn’t be afraid of a Skynet situation from weak AI. But some things could still go wrong.
1. Bad intentions
If we teach an army of drones to kill people using machine learning, can the results be ethical?
https://youtu.be/TlO2gcs1YvM
A small scandal broke last year surrounding this very topic. Google was developing software for a military effort called Project Maven that involves drones. In the future, it may help create completely autonomous weapon systems.
As a result, 12 Google employees resigned in protest and 4,000 more signed a petition requesting the company abandon the contract with the military. More than 1,000 well-known scientists in the fields of AI, ethics, and IT wrote an open letter to Google, asking the company to abandon the project and support an international agreement that would ban autonomous weapons.
2. Developer bias
Even if machine-learning algorithm developers mean no harm, a lot of them still want to make money — which is to say, their algorithms are created to benefit the developers, not necessarily for the good of society. Some medical algorithms might recommend expensive treatments over the treatments with the best patient outcomes, for example.
Sometimes society itself has no interest in an algorithm becoming a moral paragon. For example, there is a trade-off between traffic speed and the car accident death rate. We could program autonomous cars to drive no faster than 15 mph, which would bring road fatalities close to zero but would negate the other benefits of using a car.
3. System parameters don't always include ethics
Computers by default don’t know anything about ethics. An algorithm can put together a national budget with the goal of “maximizing GDP/labor productivity/life expectancy,” but without ethical limitations programmed into the model, it might eliminate budgets for schools, hospices, and the environment, because they don’t directly increase the GDP.
With a broader goal, it might decide to increase productivity by getting rid of anyone who is unable to work.
The point is, ethical issues must be incorporated from the very beginning.
...
Continue Reading