Advances in technology such as artificial intelligence raise profound questions. Philosophers can help us answer them.

When people ask me what I do for a living and I tell them that I’m a philosopher, their expression often evinces a mix of embarrassment and amusement.

The embarrassment is no doubt because they don’t have a very clear idea of what my line of work would involve, so they don’t know quite how to ask a polite follow-up question.

The amusement, I suspect, arises because insofar as they can conjure up an image of philosophers at all, it is of robed figures with long white beards, disengaged from the world and saying abstruse and esoteric things. I’m sure the thought that someone might actually be paid to engage in such activities seems rather preposterous.

So we clearly need to do a better job of explaining what philosophy is about, who philosophers are, and what they do!

We philosophers do still count some white beards in our ranks, but a good deal of the most interesting and innovative work in our field is being done by young women.

And while philosophy does often involve abstraction, philosophers generally think one of their primary tasks is to banish abstruseness and uphold standards of clarity and rigour in language and argument.

Moreover, many philosophers are deeply engaged with the practical issues faced by individuals and societies, rather than standing back from them.

Self-driving cars raise philosophical questions. Photo: Flystock/Shutterstock.com

One such issue, which has recently been getting a great deal of attention in the popular media and is of particular interest to young people, is the development of new technologies, and of artificial intelligence (AI) in particular.

Should we generally regard these awesome achievements with delight, or with dread, or with something in between? More specifically, how (if at all) should we regulate and control impressive new devices like self-driving cars or deal with the risks involved in enabling more advanced forms of machine learning?

To some degree these questions are technical and empirical, relying on our best predictions about how societies will evolve in response to these innovations. But they are also philosophical.

Self-driving cars, for example, will need to be programmed to make decisions that involve weighing the risks of different harms and benefits and balancing competing values. When faced with the options of driving into a tree, at risk to the life of the driver, or ploughing into a pedestrian who has wandered into the middle of the street, these devices will require algorithms that determine which option to take.

Is self-sacrifice required to protect others, and to what extent? Should the lives of those who have wandered into the street be discounted relative to those who have remained on the footpath?
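To see why this is not a purely technical matter, consider a deliberately simplified sketch, in Python, of the kind of weighing such an algorithm would have to perform. The probabilities, parties and weights below are invented for illustration; the point is that the weights themselves are ethical judgements that no amount of engineering can supply.

```python
# A hypothetical, toy sketch of how a vehicle's software might weigh the
# harms described above. None of the numbers or names come from any real
# system; they are placeholders that make the philosophical point visible:
# someone has to choose them.

def expected_harm(option, value_weights):
    """Sum each party's probability of serious injury, scaled by the
    weight we have chosen to give that party's safety."""
    return sum(
        outcome["p_injury"] * value_weights[outcome["party"]]
        for outcome in option["outcomes"]
    )

# Two options from the example: swerve into the tree, or continue ahead.
options = {
    "swerve_into_tree": {
        "outcomes": [{"party": "driver", "p_injury": 0.6}],
    },
    "continue_ahead": {
        "outcomes": [{"party": "pedestrian", "p_injury": 0.9}],
    },
}

# These weights ARE the ethical judgement. Setting the pedestrian's weight
# below the driver's amounts to discounting those who step onto the road.
value_weights = {"driver": 1.0, "pedestrian": 1.0}

best = min(options, key=lambda name: expected_harm(options[name], value_weights))
print(best)  # with equal weights, the option with the lower expected harm wins
```

Change the pedestrian’s weight and the “best” option changes with it, which is precisely where the questions above begin to bite.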

Human beings, rather than machines, must consider how to resolve these practical conflicts and challenges, and doing so will require philosophical reflection.

Another challenge in thinking about new technologies is that it is often quite difficult to predict with confidence where they will lead us, and hence what we can and ought to do to regulate and govern them.

How, for example, will new forms of AI transform human life? Some suggest that it may pose a serious existential threat, while others envisage it ushering in a safer, less burdensome, and more bountiful human future.

It is not clear how likely either of these scenarios (or the intermediate alternatives between them) is. Yet we need to make decisions now about how such technologies will be developed, or perhaps about what might be done to prevent their development.

How should we go about making such decisions? If we lack even a rough understanding of the probable outcomes of different policy choices, weighing up their expected costs and benefits is not a straightforward matter.

Can we even choose rationally between policies under such circumstances, and if so, what would that mean? Here too we must turn to philosophy – in this case to ethics and decision theory – to consider which approaches to these questions make sense.
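To give a flavour of what decision theory contributes here, the following toy sketch (again in Python, with entirely invented payoffs and policy names) contrasts two standard decision rules. When we cannot assign reliable probabilities to outcomes, the rules can disagree, and deciding which rule to trust is itself a philosophical question.

```python
# A toy contrast between two standard rules from decision theory, with
# entirely invented payoffs. The purpose is only to show that, absent
# reliable probabilities, the choice of decision rule is itself a
# philosophical commitment.

# Hypothetical "value" of each outcome for two policies (higher is better).
policies = {
    "develop_rapidly": {"good_outcome": 100, "bad_outcome": -1000},
    "develop_cautiously": {"good_outcome": 40, "bad_outcome": -50},
}

def expected_value(policy, p_good):
    """Standard expected-utility rule: needs a probability we may not have."""
    return p_good * policy["good_outcome"] + (1 - p_good) * policy["bad_outcome"]

def maximin(policy):
    """Precautionary rule: judge each policy by its worst case alone."""
    return min(policy.values())

# If we (somehow) believed the good outcome were 95% likely,
# expected utility favours rapid development...
print(max(policies, key=lambda name: expected_value(policies[name], 0.95)))

# ...but maximin, which ignores probabilities altogether, favours caution.
print(max(policies, key=lambda name: maximin(policies[name])))
```

Expected utility demands a probability we may not be entitled to assume; maximin refuses to use one at all. Neither choice is neutral, and adjudicating between them is philosophical work.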
