Introduction - What is an agent?

 Perception

  sensors

 Action

  effectors
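The sense/act cycle above can be sketched in a few lines; the function names and the vacuum-world percepts are illustrative assumptions, not from the notes.

```python
# Minimal sketch of the agent cycle: sensors read a percept from the
# environment, a policy maps it to an action, effectors apply it.
# (Toy dict-based environment; purely illustrative.)

def agent_step(environment, policy):
    percept = environment["percept"]       # sensors read the environment
    action = policy(percept)               # the policy picks an action
    environment["last_action"] = action    # effectors act on the environment
    return action

env = {"percept": "dirty"}
chosen = agent_step(env, lambda p: "suck" if p == "dirty" else "noop")
```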

 

How Agents Should Act - Performance Measurement

 Measurement cannot require omniscience

  eg. a door falls off a jet and hits you: unforeseeable, so it should not count against the agent

 Timing issues

  Average vs. discounted vs. total

  Optimize in the short vs long run?
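The three timing choices can be compared on a toy reward stream (the numbers are assumed, not from the notes); a discount factor below 1 favors the short run.

```python
# Score the same reward stream three ways: total, average, discounted.

def total(rewards):
    return sum(rewards)

def average(rewards):
    return sum(rewards) / len(rewards)

def discounted(rewards, gamma=0.9):
    # Reward at time t is weighted by gamma**t, so later payoffs count less.
    return sum(r * gamma ** t for t, r in enumerate(rewards))

delayed = [0, 0, 10]   # payoff arrives late
early   = [10, 0, 0]   # same total, paid up front
```

Under total (or average) reward the two streams tie; discounting prefers the early payoff, which is how the short-run vs. long-run trade-off shows up formally.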

 Ideal Agent

  For any percept sequence, chooses the

   action that maximizes the expected value

   of the performance measure

 Policy

  A mapping from percept sequences to actions

 Goal

  Design agents which implement or learn good policies

Agent Structure - How policies are implemented

 Synthesis vs. Analysis

  Why synthetic?

   Perception, eg. countless distinct views must all be recognized as the same object

  Why analytical?

   Not every situation is treated the same way

 Tables - Analysis

  Directly map percepts to actions

  Problems

   Space

   Anticipation of all scenarios

  Solutions

   Compact Representations - synthesis

    Many percepts dictate same action

     Markov Environments

    Many percepts are irrelevant

   Often tables can be calculated dynamically

    eg. a branchy but shallow tree is cheaper to expand on demand than to store
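The compression noted above can be sketched as follows: in a Markov environment the latest percept determines the right action, so the table shrinks from one row per percept sequence to one row per percept. The entries are again hypothetical vacuum-world values.

```python
# Reflex table for a Markov environment: keyed by the latest percept only.
# (Toy entries; illustrates the compact representation, not a real agent.)

reflex_table = {"dirty": "suck", "clean": "right"}

def reflex_act(percept_history):
    # Every history ending in the same percept maps to the same action,
    # collapsing exponentially many table rows into one per percept.
    return reflex_table[percept_history[-1]]
```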

 

Agents and Environments