## Algorithms: Plus One Game.

### November 12, 2010

I woke up today with a game stuck in my head.

**The Game:** Player A picks a number from 1 to 10. Player B guesses a number. If Player B’s guess matches Player A’s number, then Player B wins. If not, then Player A adds 1 to his number and the game continues.

This is not a difficult game to play. The interesting thing here is that even though Player B has a winning strategy (simply guess “10” every time; since Player A’s number increases by 1 each turn, it must eventually reach 10), the game could theoretically go on forever: say Player A picked “2” but Player B always guesses “1.”
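Both points are easy to check with a tiny simulation (a sketch in Python; the function name is my own):

```python
def play(start, guesses):
    """Return the 1-indexed turn on which B's guess matches A's number,
    or None if B never wins within the supplied guesses.
    A adds 1 to his number after each miss."""
    number = start
    for turn, guess in enumerate(guesses, start=1):
        if guess == number:
            return turn
        number += 1
    return None

print(play(2, [10] * 10))    # guessing 10: A's number reaches 10 on turn 9
print(play(2, [1] * 1000))   # guessing 1: no win even after 1000 turns -> None
```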

The question is, *how quickly can we guarantee a win for Player B?*

The first solution I thought of was the following: have Player B guess “5” for five turns. If he doesn’t get it after five turns, then have him guess “15” for the next five turns. After a bit of thinking, though, this is not actually any better than Player B just guessing “10” every time: both strategies take ten turns in the worst case, and the same number of turns on average.
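A quick numerical check of that claim (Python; the helper name is my own) over all ten possible starting numbers:

```python
def turns_to_win(start, guesses):
    """Turn on which B's guess matches A's number (A adds 1 after each miss)."""
    number = start
    for turn, guess in enumerate(guesses, start=1):
        if guess == number:
            return turn
        number += 1
    raise ValueError("guesses exhausted")

always_ten = [10] * 10
five_then_fifteen = [5] * 5 + [15] * 5

for name, strategy in [("guess 10", always_ten), ("5 then 15", five_then_fifteen)]:
    turns = [turns_to_win(k, strategy) for k in range(1, 11)]
    print(name, "worst:", max(turns), "average:", sum(turns) / 10)
    # both strategies: worst case 10 turns, average 5.5
```

The “5 then 15” strategy just relabels which starting numbers are caught early: it catches 1 through 5 in the first five turns and 6 through 10 in the next five, so the multiset of finishing times is identical.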

**First Question: Is this the best we can do?**


Now let’s make the game a bit more interesting, because (at this point) the game has a maximum value and an obvious winning strategy for Player B.

**The (Harder) Game:** Player A picks any natural number. Player B guesses a natural number, and if it matches Player A’s, then Player B wins. If not, Player A adds 1 to his number and the game continues.

This game has a similar feel, but an infinite twist. There is no longer a maximum value, so our original winning strategy does not work. Nonetheless, is there a finite or countably infinite “optimal” strategy for player B?

If we were able to prove that the average number of turns for each finite game (the first game, and ones with a maximum value M) is around (M-1)/2, then the infinite case would follow naturally: just take a limit. I have a feeling that this first game can be proved to have such an optimal strategy using some kind of discrete math proof, but it’s been a while. Anyone want to take a swing at this?

(**Note:** Brooke came up with a winning strategy for Player B in the infinite case, and the solution is posted in the next post. It turns out that even in the infinite case this is not a very interesting game, and there is actually a *finite* winning strategy for Player B. It’s probably exactly what you think it is: count up by twos, starting from the beginning.)
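One concrete way to index that “count up by twos” strategy (the indexing here is my own, assuming the naturals start at 1): on turn t, Player B guesses 2t − 1. His guesses climb by 2 per turn while Player A’s number climbs by 1, so the gap closes by exactly 1 each turn and B wins in finite time against any starting number. A sketch:

```python
def winning_turn(n):
    """Turn on which B wins the infinite game against starting number n,
    using the strategy: guess 2t - 1 on turn t.  B's guesses climb by 2
    per turn while A's number climbs by 1, so the gap shrinks by 1 each turn."""
    number = n
    turn = 1
    while 2 * turn - 1 != number:
        number += 1
        turn += 1
    return turn

# With this indexing, B wins on exactly turn n against starting number n.
for n in [1, 2, 7, 100]:
    print(n, "->", winning_turn(n))
```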

Does Player A learn of Player B’s guess on the previous turn? This seems critical, since Player A could detect patterns in Player B’s guesses. How do you arrive at (M-1)/2 as the minimum number of guesses needed by Player B?

I don’t actually think it matters whether Player A knows Player B’s guesses, since Player A must always move up by 1 anyway; in the following post (the graph one), Player A definitely needs to know Player B’s moves. And I didn’t mean that (M-1)/2 was the minimum; I meant it was around the average. But I didn’t really say that, so I’ve fixed that in the post!