The cognitive implications of a future based on taste
And what AI use will tell us about how humans imagine and create
The age of humans owning all cognitive skills is over. This is what many have increasingly come to think over the last couple of decades. Whilst many hard skills in the real world - like plastering a wall or building a bridge - are (for now) out of AI’s reach, many digital ones are already up for grabs. Today’s AI can do engineering design, run data analysis, manage sales pipelines, and so much more.
In this new world, the more expressive hard skills have also (surprisingly quickly) joined the firing line. Maybe in a few years we’ll no longer need painters to make beautiful landscapes, graphic designers to make content, cinematographers to make movies, writers to produce articles, or composers to make music.
In a future where many of these hard skills can be done by autonomous digital agents, there’s a growing exploration of what role humans will play in societies and economies. A take I’m finding particularly interesting centres on the role of ‘taste’.
Taste is King
If we don’t need to do the manual tasks (or can’t do them as well as AI, so don’t bother), then perhaps humans in general ‘move up a level’. Instead of rolling up our sleeves and getting into it, we become the editors in chief or creative directors of the world. In this situation, what people really have to offer is their ‘taste’ - we move from creatures of creation to creatures of curation.
This isn’t totally new to our society. Creative directors of fashion houses don’t go and cut fabric themselves, movie directors rarely operate the camera when making a film, and the world’s biggest artists often don’t touch a paintbrush (Damien Hirst has a primary studio in my home town and I know the people who physically put things together).
“I have no technical ability. And I know nothing about music….I know what I like and what I don’t like, and I’m decisive about what I like and what I don’t like.”
Rick Rubin (one of the most successful music producers of all time) - source
I can see this happening in other places. Already, with popular tools like Cursor AI and voice tech that’s getting ever better, we’re watching explorations of ‘vibe coding’ (a concept made popular by Andrej Karpathy) - where engineers don’t write a single line of code by hand, but instead describe by voice what they’d like to see and let AI do the rest. This really is humans imagining something and simply bringing elements together to make it happen.
We can imagine a world where writers simply outline the arguments they want to make, vet the gathered evidence that seems apt, outline the communicative tools they’d like to incorporate, and then let AI do the rest. Designers need not sit down at their Apple Mac for 8 hours a day, but instead simply pull inspirations together for an AI system to utilise and describe what they want the overall vibe of a visual to be. In this same world, humans wouldn’t have to trudge through the internet or databases to find the things they want to read; they could simply outline the sort of information they want and the actual work of finding things could be done for them, e.g. with Deep Research models. The creator of a new information curation app I’ve been playing with - Sublime - is already on the taste bandwagon. A recent a16z podcast with Adobe’s Chief Strategy Officer explored this concept of AI in the creative process in interesting ways, and I’d recommend giving it a listen.
This will sound like a dream to some and a nightmare to others. Personally, I don’t hate this framing. I tend to think of myself as someone with fairly good taste, and a world in which my ability to enact things isn’t locked behind hours (if not years) of training to learn new software or skills sounds pretty good.
In this world, what matters is ideas - whilst communication becomes an activity somewhat enacted on our behalf (a theme I’ve explored elsewhere). Therefore, behind this shift is an age-old philosophical query that I think is worth bringing into the discussion.
Cognition and Communication
"The limits of my language mean the limits of my world."
Ludwig Wittgenstein
A dominant exploration in the world of philosophy and theory of mind is whether thought and communication are intrinsically linked, or whether they’re totally separate facets and acts.
There is obviously much more nuance, but the debate essentially comes down to two sides:
Our ability to think is limited by our ability to conceptualise and communicate ideas
We know what we think and communication is simply a means of expressing these ideas if we want to
If AI can now do the communicative, hands-on tasks for us and we, as humans, move up to become the curators and directors of the world, will our own unique ability to make sense of the world shift?
All of the creative forms I describe above are, after all, communicative. I make art that can be seen, music that can be heard, and writing that can be read. Whether I share it with others or not, these are codified expressions of ideas now sitting outside of my own mind.
The implications
If we assume AI will keep getting better at all tasks and we assume that societal adoption will continue to grow, then it’s feasible to imagine a fairly near-term future where we’re using these AI tools actively for the above types of task.
If cognition and communication are linked
If we really do need the doing to do the ‘thinking’ - and we’re no longer doing the doing - this could, over time, have an impact on human cognition. Whether that impact is negative or positive largely depends on the ways we use AI.
If we offload a lot of this work to AI, trust its outputs fully, and stop doing much active thought altogether, then we could see a decline in human cognition over time. If humans no longer do the coding, the writing, or the painting, then there’s a world in which our thinking becomes limited and human progress stalls.
If, however, these tools lower the barriers to communication for those who might have historically struggled, this world could in fact massively expand our ability to think and create. If I didn’t have the words I needed to conceptualise something, AI could find the word for me. If I can’t make diagrams but need a visual to truly wrap my head around a concept, AI could help me do this. If I need a conversational partner to think something through, AI could be that partner (as I’ve previously explored). In this case, more ideas could be brought into existence and our cognitive abilities overall would increase.
If cognition and communication are separate
If this theory is true, then I see no real risk to human cognition. This would likely just mean brilliant ideas are communicated better and by more people.
One main shift, however, might be in ideas of status. Currently, those with the ability to communicate their ideas in impactful ways hold higher positions in society - whether they have better ideas or not (I’m sure we can all think of someone incredibly eloquent who really never says anything interesting or new…). These people will no longer have a natural upper hand, but may in fact be replaced by folks who are tooling masters and can now subsequently communicate better (whether they’re better thinkers or not).
Finding answers
This is an old debate. It’s an old debate because there’s no objective, easy answer. I personally need to write or talk to make sense of complex ideas, but I know people who can do everything in their head and simply deliver the ideas when they’re needed. Therefore your view of this conundrum and mine might be very different.
Since there is no unified answer, there is no unified direction of travel for technological development. However, I think some things could be answered nonetheless. As much as we might be unable to answer this human question in the pursuit of better tech, building the tech may help us answer the human question - not only by learning from the development of the foundational technology itself, but by seeing which of these theoretical futures comes true.