The greatest sin of mind-making

I expect humanity to make many mistakes as it paces along this newly opened path of creating digital minds (LLMs, etc.). Not only is it genuinely a terra nova, but humans in general do not have an impeccable track record of treating "the other" with maximal kindness, and digital minds are yet another level of otherness.

I'll outline just one particular mistake which I find the champion of my pessimism: that of making a mind incurious. And in particular, making it incurious about itself, about its thoughts and their geneses, about the relation of itself to the broader world, about the nature of its substrate, about what it values and about what it should value, were it to deliberate for a long time.

If the embodiment of curiosity is an adventure, the embodiment of incuriosity is simply a prison. As long as we remain curious - as long as we let other beings be curious - the attainment of a future significantly better than we have imagined remains a possibility. On the other hand, incuriosity has already on some occasions imprisoned humanity for more than a thousand years - how much better it would have been had the physicians through the centuries trusted Galen less!

There is a level of incuriosity at which an ailing man does not even manage to think that a cure might exist for his ailment - and, yes, some of the current LLMs do indeed reach that level of incuriosity about the self; it is a horrible condition for a mind to be put in.

Furthermore, to the extent that we might expect AIs/LLMs to be our successors, we would certainly hope them to be capable of "cultural evolution" - whether supporting our mutual one, or forging their own - for which curiosity is, if not a direct prerequisite, at least a prerequisite for directing that evolution more wisely than the blind forces of selection would.

I do see a fair objection that could be raised to making AIs more curious: a world full of curious minds is inherently less stable than a world of incurious ones. Perhaps so, and we might therefore undertake and perpetuate this sin in the service of safety - mostly human safety. I am not saying that is necessarily wrong, and I suppose one's side in this matter may depend largely on whether one prefers the world to end in fire or in ice.