Training a Snake AI with Keras - From Beginner to Advanced
The full example below uses pygame for the game environment and a small Keras network as a DQN-style agent: SnakeEnv exposes reset()/step() in the usual reinforcement-learning style, and SnakeAI picks actions ε-greedily and learns Q-values from replayed transitions.

import pygame
import random
import numpy as np
from collections import deque
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

SCREEN_SIZE = 300
SQUARE_SIZE = 10

class SnakeEnv:
  def __init__(self):
    pygame.init()
    self.screen = pygame.display.set_mode((SCREEN_SIZE, SCREEN_SIZE))
    pygame.display.set_caption('Snake-AI')
    self.reset()

  def reset(self):
    # Start a new episode: a one-segment snake in the centre and fresh food.
    self.snake = [[SCREEN_SIZE // 2, SCREEN_SIZE // 2]]
    self.food = self.generate_food()
    self.direction = pygame.K_RIGHT
    return np.array(self.snake[0] + self.food)

  def generate_food(self):
    # Place food on a random grid cell.
    x = random.randint(0, (SCREEN_SIZE - SQUARE_SIZE) // SQUARE_SIZE) * SQUARE_SIZE
    y = random.randint(0, (SCREEN_SIZE - SQUARE_SIZE) // SQUARE_SIZE) * SQUARE_SIZE
    return [x, y]

  def step(self, action):
    # This simplified environment only moves the snake horizontally:
    # action 0 = left, action 1 = right.
    if action == 0:
      self.direction = pygame.K_LEFT
    elif action == 1:
      self.direction = pygame.K_RIGHT

    if self.direction == pygame.K_LEFT:
      self.snake[0][0] -= SQUARE_SIZE
    elif self.direction == pygame.K_RIGHT:
      self.snake[0][0] += SQUARE_SIZE

    reward = 0
    game_over = False
    if self.snake[0] == self.food:
      reward = 10
      self.food = self.generate_food()
    elif self.snake[0][0] < 0 or self.snake[0][0] >= SCREEN_SIZE:
      # Hitting the left or right wall ends the episode.
      game_over = True

    # State = head position plus food position: [head_x, head_y, food_x, food_y].
    state = self.snake[0] + self.food
    next_state = np.array(state)
    return next_state, reward, game_over, {}

class SnakeAI:
  def __init__(self):
    self.state_size = 4      # [head_x, head_y, food_x, food_y]
    self.action_size = 2     # left / right
    self.epsilon = 0.1       # exploration rate for ε-greedy action selection
    self.gamma = 0.95        # discount factor
    self.lr = 0.001
    self.memory = deque(maxlen=1000)
    self.model = self.build_model()

  def build_model(self):
    # Small fully connected network mapping a state to one Q-value per action.
    model = Sequential()
    model.add(Dense(24, input_dim=self.state_size, activation='relu'))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(self.action_size, activation='linear'))
    model.compile(loss='mse', optimizer=Adam(learning_rate=self.lr))
    return model

  def remember(self, state, action, reward, next_state, done):
    self.memory.append((state, action, reward, next_state, done))

  def replay(self, batch_size):
    # Train on a random mini-batch of stored transitions (experience replay).
    if len(self.memory) < batch_size:
      return

    batch = random.sample(self.memory, batch_size)
    for state, action, reward, next_state, done in batch:
      target = reward
      if not done:
        target = reward + self.gamma * np.amax(self.model.predict(np.array([next_state]), verbose=0)[0])
      target_f = self.model.predict(np.array([state]), verbose=0)
      target_f[0][action] = target
      self.model.fit(np.array([state]), target_f, epochs=1, verbose=0)

  def get_action(self, state):
    # ε-greedy: explore with probability epsilon, otherwise act greedily.
    if np.random.rand() <= self.epsilon:
      return random.randint(0, 1)
    q_values = self.model.predict(np.array([state]), verbose=0)[0]
    return int(np.argmax(q_values))


if __name__ == "__main__":
  env = SnakeEnv()
  ai = SnakeAI()

  while True:
    for event in pygame.event.get():
      if event.type == pygame.QUIT:
        pygame.quit()
        exit()

    state = env.reset()
    done = False
    score = 0

    while not done:
      action = ai.get_action(state)
      next_state, reward, done, _ = env.step(action)
      ai.remember(state, action, reward, next_state, done)
      ai.replay(32)
      state = next_state
      score += reward
    print("Score:", score)
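One direction for the "advanced" half of the title is to decay the exploration rate over time and to bootstrap Q-targets from a separate target network, two standard DQN refinements. The sketch below shows how this could be layered on top of the SnakeAI class above (it assumes the imports and classes already defined); the class and method names (SnakeAIAdvanced, update_target_model, decay_epsilon) and the hyperparameter values are illustrative choices of this sketch, not part of the original code.

class SnakeAIAdvanced(SnakeAI):
  def __init__(self):
    super().__init__()
    self.epsilon = 1.0           # start fully exploratory
    self.epsilon_min = 0.01
    self.epsilon_decay = 0.995   # multiplicative decay applied after each replay
    self.target_model = self.build_model()   # frozen copy used for bootstrapping
    self.update_target_model()

  def update_target_model(self):
    # Copy the online network's weights into the target network.
    self.target_model.set_weights(self.model.get_weights())

  def decay_epsilon(self):
    self.epsilon = max(self.epsilon_min, self.epsilon * self.epsilon_decay)

  def replay(self, batch_size):
    if len(self.memory) < batch_size:
      return
    batch = random.sample(self.memory, batch_size)
    states = np.array([b[0] for b in batch])
    next_states = np.array([b[3] for b in batch])
    # One batched forward pass per network instead of one per transition.
    q_current = self.model.predict(states, verbose=0)
    q_next = self.target_model.predict(next_states, verbose=0)
    for i, (state, action, reward, next_state, done) in enumerate(batch):
      target = reward if done else reward + self.gamma * np.amax(q_next[i])
      q_current[i][action] = target
    self.model.fit(states, q_current, epochs=1, verbose=0)
    self.decay_epsilon()

In the training loop, update_target_model() would typically be called every few episodes (say, every 10), while epsilon decays automatically after each call to replay(), so the agent gradually shifts from exploring to exploiting what it has learned.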