
Don’t Let AI Steal Your Tacit Knowledge
A recent, and extremely controversial, Palantir essay advocates for deep enmeshment of the state with artificial intelligence in the name of national security. Karl et al. argue that those unwilling or unable to integrate AI into their nation’s security infrastructure will be overtaken by those who do. Businesses and governments that adopt AI without a clear structure can burn through limitless credits without gaining much competitive advantage, while those who refuse to adopt it at all fare even worse. Gaining power in our world now requires the effective use of artificial intelligence.

While this is necessary where survival is at stake, we risk drifting toward a world where our best ideas remain forever in Plato’s realm of forms. AI enables far more effective execution, and a subset of ideation, but it must be recognized that most machine learning models share a single form of intelligence. They can offer recommendations shaped by their particular metacognitive personalities, but each is analogous to one person, with one real personality, who happens to have access to almost limitless knowledge. No matter how much that person knows, they will still prioritize and approach problems in the way that is native to them. Much of the entrepreneurship and creativity in the world arises from the intersection of different people’s tacit knowledge. Limited in intelligence and information, each person’s unique collection of pre-informational analytical traits lets them approach the world in distinctive ways, yielding discoveries that could not arise from millions of AI agents sharing one fundamental cognitive structure.
AI can helpfully surface a subset of the good ideas for approaching your situation, but the things that work best in the world often seem inadvisable at first, or the information needed to make the right decision is radically unknown. AI can prototype effectively in some situations involving unknowns, but some goals and situations have few reference points from which to draw conclusions, and in those there is more room for cognitive uniqueness. While artificial intelligence can certainly offer ideas that seem novel, it is for the most part tied to situations it has already seen. People making the wrong decision can often lead to good results, and missing out on those wrong decisions could produce a future governed by risk aversion, unable to fail its way into discovery. Additionally, the legal risks facing AI companies push their models toward risk-averse advice rather than suggestions that might be right for a specific person in a specific situation. In short, AI can help us generate good ideas, but it will systematically err toward a certain subset of them, by the very nature of having one cognitive structure rather than the many found in actual people.