= AI =

AI, or artificial intelligence, is the art of making intelligent machines.

== Goals ==

* Expert systems
    * Exhibit intelligent behaviour in one field (e.g. chess AIs beat humans)
* Human intelligence
    * Behave, think, and learn like humans

== Types ==

* AI
    * Machines which mimic human behaviour
    * Not even close as of today
* [[machine_learning]]
    * Allow machines to find statistical relationships in data to mimic patterns
* Deep learning
    * Subset of machine learning; makes multilayer neural networks feasible

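The "find statistical relationships in data" idea from the machine-learning bullet can be sketched with the simplest possible example: fitting a line to noisy points by ordinary least squares. The function name and data below are my own illustration, not from any particular library.

```python
# Minimal sketch of finding a statistical relationship in data:
# fit y = a*x + b to noisy points with ordinary least squares.

def fit_line(xs, ys):
    """Return slope a and intercept b minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Points lying close to the line y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 8.9]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # 1.97 1.08
```

The machine "learns" the pattern (slope near 2, intercept near 1) purely from the data, without it being programmed in explicitly.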
== Agents ==

An agent is anything that perceives its environment through sensors and
acts upon the environment through actuators.

Some types of agents:

* Simple reflex agents
    * Make decisions based on the current percepts, ignoring previous state
    * Only work in a fully observable environment
* Model-based reflex agents
    * Build a model of what is happening in the world and make decisions
      based on that model
    * Work in a partially observable environment
* Goal-based agents
    * Knowledge of the environment is not always the best information for
      deciding what to do
    * The agent also knows its goal and which situations are desirable
    * Extend the model-based agent with a goal to work towards as well as a
      model of the world
* Utility agents
    * Goal-based agents that also have a utility score measuring how close a
      given state is to the end goal
    * Act based on the goal as well as the best way to reach it
* Learning agents
    * Learn from previous experience: start acting with basic knowledge,
      then adapt automatically through learning
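The difference between the first two agent types can be sketched in code. This is a toy two-cell "vacuum world"; the percept format, action names, and the world itself are my own illustration, not from these notes.

```python
# A simple reflex agent reacts only to the current percept; a model-based
# agent keeps internal state about cells it has already seen.

def simple_reflex_agent(percept):
    """Decide from the current percept only: (location, dirty?)."""
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

class ModelBasedAgent:
    """Keeps a model of the last known dirt status of each cell."""
    def __init__(self):
        self.model = {}  # location -> last observed dirt status

    def act(self, percept):
        location, dirty = percept
        self.model[location] = dirty  # update the internal world model
        if dirty:
            return "Suck"
        if len(self.model) == 2 and not any(self.model.values()):
            return "NoOp"  # the model says both cells are clean
        return "Right" if location == "A" else "Left"

print(simple_reflex_agent(("A", True)))  # Suck
agent = ModelBasedAgent()
print(agent.act(("A", False)))           # Right
print(agent.act(("B", False)))           # NoOp
```

The reflex agent can never stop, because it cannot remember that the other cell was already clean; the model-based agent can, which is why it copes with a partially observable world.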

Example: a taxi driver

* Percepts
    * Cameras, GPS, microphone, etc.
* Action
    * Steer, accelerate, brake, etc.
* Goal
    * A safe, fast, legal, comfortable trip; maximise profit
* Environment
    * Roads, traffic, pedestrians, customers
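The taxi-driver description above can be written down as a small record, which makes it easy to do the same exercise for other agents. The `AgentSpec` dataclass and its field names are my own illustration for these notes, not a standard API.

```python
# Record the percepts / actions / goal / environment of an agent.
from dataclasses import dataclass

@dataclass
class AgentSpec:
    percepts: list      # what the agent can sense
    actions: list       # what the agent can do
    goal: list          # what counts as success
    environment: list   # what the agent operates in

taxi = AgentSpec(
    percepts=["cameras", "GPS", "microphone"],
    actions=["steer", "accelerate", "brake"],
    goal=["safe", "fast", "legal", "comfortable trip", "maximise profit"],
    environment=["roads", "traffic", "pedestrians", "customers"],
)
print(taxi.percepts)  # ['cameras', 'GPS', 'microphone']
```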

An environment whose complete state the agent can observe is called a fully
observable environment; otherwise it is partially observable. Most
environments are only partially observable.


A stochastic environment is random in nature and cannot be completely
determined by the agent.

In a deterministic environment that is fully observable, agents do not need
to account for random chance.

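The deterministic/stochastic distinction can be made concrete: the same action from the same state always yields one successor in a deterministic environment, but is drawn from a distribution in a stochastic one. The one-dimensional world below is my own illustration.

```python
# Same state, same action: one outcome vs. a distribution of outcomes.
import random

def deterministic_step(pos, action):
    """Moving right always succeeds."""
    return pos + 1 if action == "right" else pos

def stochastic_step(pos, action, rng):
    """Moving right succeeds only 80% of the time (e.g. wheel slip)."""
    if action == "right" and rng.random() < 0.8:
        return pos + 1
    return pos

print(deterministic_step(0, "right"))  # always 1
rng = random.Random(0)
outcomes = {stochastic_step(0, "right", rng) for _ in range(100)}
print(outcomes)  # a mix of staying at 0 and moving to 1
```

An agent in the stochastic world must plan for both outcomes of every move; in the deterministic world it can simply predict the result.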

If the environment can change while the agent is deliberating about what to
do, the environment is dynamic; otherwise it is static. Static environments
are much easier to deal with, because the agent need not constantly check
whether to change course suddenly.


If the number of possible states that can be perceived is bounded, the
environment is called a discrete environment; otherwise it is a continuous
environment. A chess game is discrete, because there is a finite number of
moves that can be played.


If there is only one agent in the environment, it is called a single-agent
environment. However, if multiple agents operate in the environment, it is
a multi-agent environment. The design problems are much harder in a
multi-agent environment.