The Long Now Foundation is a wonderful organization advocating for long-term thinking. Specifically, by “long term” they mean the next ten thousand years:
The Long Now Foundation was established in 01996 to develop the Clock and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common. We hope to foster responsibility in the framework of the next 10,000 years.
The 01996 gimmick is to remind people of the $10^4$ year timescale. While I agree that our society focuses too much on the short term, the Long Now’s timescale is off by a factor of 100-200. Instead of short- or long-term thinking, we need what I’ll call “medium-term thinking”, meaning 10 to 50 or 100 years out. Here’s why:
A detour into kalah
Kalah is a game in which stones are picked up and sown around a small board to score points or capture the opponent’s stones. A typical game lasts 20-30 moves. A game ends when one player runs out of stones on their side of the board; that player then wins all stones remaining on the opponent’s side as points.
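For concreteness, here is a sketch of the core sowing move in Python. This assumes the common 6-pit variant of kalah; the board layout, function name, and exact rule details (extra turn on landing in your store, capture on landing in an empty pit on your side) are my own choices for illustration, not taken from the post.

```python
def sow(board, pit, player):
    """Sow the stones from `pit` for `player` (0 or 1).

    board: list of 14 ints. Pits 0-5 and store 6 belong to player 0;
    pits 7-12 and store 13 belong to player 1.
    Returns (new_board, extra_turn).
    """
    board = board[:]
    stones, board[pit] = board[pit], 0
    own_store = 6 if player == 0 else 13
    opp_store = 13 if player == 0 else 6
    i = pit
    while stones > 0:
        i = (i + 1) % 14
        if i == opp_store:          # never sow into the opponent's store
            continue
        board[i] += 1
        stones -= 1
    # landing in your own store grants another turn (common rule)
    extra_turn = (i == own_store)
    # landing in a previously empty pit on your own side captures the
    # stones opposite it (rule variants differ; this is one common form)
    own_pits = range(0, 6) if player == 0 else range(7, 13)
    if not extra_turn and i in own_pits and board[i] == 1:
        opposite = 12 - i
        board[own_store] += board[opposite] + board[i]
        board[opposite] = 0
        board[i] = 0
    return board, extra_turn

start = [4]*6 + [0] + [4]*6 + [0]    # 4 stones per pit, empty stores
b, again = sow(start, 2, 0)          # pit 2's last stone lands in the store
```

Sowing pit 2 from the starting position drops the fourth stone into player 0’s store, scoring a point and earning an extra turn.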
Trivially, long-term thinking is the right policy if one has sufficient resources: you should think ahead to the end of the game, and take into account everything that happens throughout. This includes both points scored in the middle of the game, and the possible windfall at the end.
In high school, I wrote a program to play kalah. It was a simple brute force search engine. At first, it could see ahead only 4 or 5 moves, and was easy to beat: all one had to do was plan for the end of the game. The computer would rack up a few tactical victories on the way, but the windfall would go to the human at the end, and the human would win. A clear victory for long-term over short-term thinking.
But then I optimized the program, and it started to see 7 or 8 moves ahead. It still wasn’t long-term thinking: remember that a typical game lasts 20-30 moves. Despite that, the computer would win every game. Indeed, it would win by a landslide: it would rack up a bunch of points along the way, and also get the windfall at the end.
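The post doesn’t show the engine itself, but a depth-limited brute force search of this kind is easy to sketch. The version below is a generic minimax over an abstract game, with a tiny hypothetical “take a pile” game for demonstration; none of these names come from the original program. The key detail is the horizon evaluation: when the search runs out of depth, it simply returns the points scored so far.

```python
def minimax(state, depth, moves, apply_move, score, maximizing=True):
    """Return (best_value, best_move), looking `depth` plies ahead."""
    legal = moves(state, maximizing)
    if depth == 0 or not legal:
        return score(state), None   # horizon: evaluate points-so-far
    best_val, best_move = None, None
    for m in legal:
        val, _ = minimax(apply_move(state, m, maximizing),
                         depth - 1, moves, apply_move, score,
                         not maximizing)
        if best_val is None or (val > best_val if maximizing else val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# Toy game: players alternate taking one pile; score = my total - yours.
def moves(state, _maximizing):
    _, _, piles = state
    return list(range(len(piles)))

def apply_move(state, i, maximizing):
    a, b, piles = state
    rest = piles[:i] + piles[i+1:]
    return (a + piles[i], b, rest) if maximizing else (a, b + piles[i], rest)

def score(state):
    return state[0] - state[1]

val, move = minimax((0, 0, (3, 1, 2)), depth=3,
                    moves=moves, apply_move=apply_move, score=score)
```

With three piles and depth 3, the search sees the whole game: the maximizer takes the pile of 3 first and ends two points ahead against best play.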
It matters who has control
What was happening? When the computer was looking 8 moves ahead, it was still only taking into account points scored within those 8 moves. However, points scored towards the end of the 8 move window tend to go to the player who has tactical control over the board: if I have a lot of safe options at move 5, I’m more likely to win points in moves 6-8. Thus, points scored by move 8 are a proxy metric for tactical control over the board.
And then! This 8 move plan is not used to make 8 moves: only the first such move is actually played. The next time the computer moves, it uses a new 8 move window, looking ahead a total of 10 moves into the game. And so on. At all times, the computer is optimizing a proxy metric for tactical control, as a byproduct of scoring a few points along the way. Eventually, when the 8 move window sees to the end of the game, the computer already has tactical control, and it can use this control to grab the windfall.
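This plan-then-play-one-move loop can be sketched as follows. It is a reconstruction on a toy “take a pile” game, not the original code; the search here is a compact self-contained depth-limited minimax, and all names are mine. Each turn, the current player plans `depth` plies ahead, commits only to the first move of the plan, and replans from the new position.

```python
def search(piles, depth, maximizing):
    """Depth-limited minimax on the toy game; returns (value, pile index).
    Value is from the maximizer's perspective, counting only points
    scored within the window."""
    if depth == 0 or not piles:
        return 0, None
    best_val, best_i = None, None
    for i, p in enumerate(piles):
        sub, _ = search(piles[:i] + piles[i+1:], depth - 1, not maximizing)
        val = (p if maximizing else -p) + sub
        if best_val is None or (val > best_val if maximizing else val < best_val):
            best_val, best_i = val, i
    return best_val, best_i

def play(piles, depth):
    """Replan every turn; commit only to the first move of each plan."""
    scores = [0, 0]
    player = 0
    while piles:
        _, i = search(piles, depth, maximizing=(player == 0))
        scores[player] += piles[i]          # play ONLY the plan's first move
        piles = piles[:i] + piles[i+1:]
        player = 1 - player                 # opponent replies; then replan
    return scores

final = play([4, 3, 2, 1], depth=2)
```

Even with a window of only 2 plies on a 4-move game, the loop plays the whole game out, re-searching from scratch at every turn.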
How to choose a timescale
Now back to the real world. We would like to choose a timescale (or range of timescales) around which to structure our thoughts. The lesson from the kalah example is that we shouldn’t think too short term, but we also shouldn’t think too long term if control is decided in the middle.
Consider what’s going to happen in the next 10-100 years. If civilization survives, we’re going to get strong AI. A good chunk of global warming will have already happened or not, along with a much better understanding of mitigation options such as geoengineering. We may have a self-sufficient space presence. We’ll be able to print custom human genomes from scratch.
Even just the first of these, strong AI, almost entirely determines control. A few other dimensions matter over a long timescale, such as climate change and other environmental disasters, but these also matter in the short and medium terms! To argue against medium-term thinking in favor of long-term thinking, you need a dimension that affects only the long term, or affects the long term differently than it affects the short term; climate change does not qualify.
Thus, the next time someone adds a zero to a date: dissent. A hundred years ought to be enough for anyone.