Philosophy is critical to a high-tech future

When people ask me what I do for a living and I tell them that I’m a philosopher, their expression often evinces a mix of embarrassment and amusement.

The embarrassment is no doubt because they don’t have a very clear idea of what my line of work would involve, so they don’t know quite how to ask a polite follow-up question.

The amusement, I suspect, is that insofar as they can conjure up an idea of philosophers, these would be robed figures with long white beards, disengaged from the world and given to saying abstruse and esoteric things. I’m sure the thought that someone might actually be paid to engage in such activities seems rather preposterous.

So we clearly need to do a better job of explaining what philosophy is about, who philosophers are, and what they do!

Philosophers do still count some white beards in our ranks, but a good deal of the most interesting and innovative work in our field is being done by young women.

And while philosophy does often involve abstraction, philosophers generally think one of their primary tasks is to banish abstruseness and uphold standards of clarity and rigour in language and argument.

Moreover, many philosophers are deeply engaged with the practical issues faced by individuals and societies, rather than standing back from them.

One such issue, which has recently been getting a great deal of attention in the popular media and is of particular interest to young people, is the development of new technologies, and of new forms of artificial intelligence (AI) in particular.

Should we generally regard these awesome achievements with delight, or with dread, or with something in between? More specifically, how (if at all) should we regulate and control impressive new devices like self-driving cars or deal with the risks involved in enabling more advanced forms of machine learning?

To some degree these questions are technical and empirical, relying on our best predictions about how societies will evolve in response to these innovations. But they are also philosophical.

Self-driving cars, for example, will need to be programmed to make decisions that involve weighing the risks of different harms and benefits and balancing competing values. When faced with the options of driving into a tree at risk to the life of the driver or ploughing into a pedestrian who has wandered out into the middle of the street, these devices will require algorithms that guide them in which option to go for.
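To make the point concrete, here is a minimal sketch (in Python, with invented names, probabilities, and weights purely for illustration, not any real vehicle's software) of what such a value-weighing algorithm might look like. Each candidate manoeuvre is scored by its probability-weighted harm, and the car picks the option with the lowest score.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A possible manoeuvre and its estimated consequences (all values hypothetical)."""
    name: str
    p_driver_death: float      # estimated probability the driver is killed
    p_pedestrian_death: float  # estimated probability the pedestrian is killed

# These weights encode a moral judgement: how to trade risk to one life
# against risk to another. Choosing them is a philosophical decision,
# not an engineering one.
WEIGHTS = {"driver": 1.0, "pedestrian": 1.0}

def expected_harm(option: Option) -> float:
    """Score an option by its probability-weighted harm."""
    return (WEIGHTS["driver"] * option.p_driver_death
            + WEIGHTS["pedestrian"] * option.p_pedestrian_death)

def choose(options: list[Option]) -> Option:
    """Pick the manoeuvre with the lowest expected harm."""
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Option("swerve into tree", p_driver_death=0.3, p_pedestrian_death=0.0),
        Option("continue ahead", p_driver_death=0.0, p_pedestrian_death=0.9),
    ]
    best = choose(options)
    print(f"Chosen manoeuvre: {best.name} (expected harm {expected_harm(best):.2f})")
```

Even in this toy form, the hard questions are visibly not technical ones: whether the weights should be equal, and whether minimising expected harm is the right goal at all, are exactly the kinds of questions philosophers work on.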