= AI =

AI, or artificial intelligence, is the art of making intelligent machines.

== Goals ==

* Expert systems
    * Exhibit intelligent behaviour in one field (e.g. chess AIs beating humans)
* Human intelligence
    * Behave, think, and learn like humans

== Types ==

* AI
    * Machines which mimic human behaviour
    * Not even close as of now
* [[machine_learning]]
    * Allows machines to find statistical relationships in data to mimic patterns (see the sketch after this list)
    * Deep learning
        * Subset of machine learning; makes multilayer neural networks feasible
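
A minimal sketch of the machine-learning point above, using made-up data: "learning" here just means fitting a statistical relationship (a least-squares line) that can then mimic the pattern on unseen inputs.

{{{python
import numpy as np

# Hypothetical data: noisy samples of y = 3x + 2 (invented for illustration)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(scale=1.0, size=x.shape)

# "Learning": find the statistical relationship in the data
slope, intercept = np.polyfit(x, y, deg=1)

# The learned pattern can now be used to mimic the data on unseen inputs
print(f"learned y ~ {slope:.2f} * x + {intercept:.2f}")
print("prediction at x = 20:", slope * 20 + intercept)
}}}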

== Agents ==

An agent is anything that perceives its environment through some sensors and
acts upon the environment through some actuators.
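
As a rough illustration of this definition only (the class and method names below are invented for the example, not taken from any library), an agent can be modelled as something that maps percepts to actions inside a sense-act loop:

{{{python
from abc import ABC, abstractmethod
from typing import Any

Percept = Any  # whatever the sensors report
Action = Any   # whatever the actuators can carry out


class Agent(ABC):
    """Anything that maps percepts from the environment to actions."""

    @abstractmethod
    def act(self, percept: Percept) -> Action:
        ...


def run(agent: Agent, environment: Any, steps: int) -> None:
    """Sense-act loop: sensors perceive, the agent decides, actuators act.

    `environment` is assumed to expose hypothetical sense()/apply() methods.
    """
    for _ in range(steps):
        percept = environment.sense()   # perceive the environment through sensors
        action = agent.act(percept)     # decide what to do
        environment.apply(action)       # act upon the environment through actuators
}}}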

Some types of agents (a short sketch contrasting the first two follows the list):

* Simple reflex agents
    * Make decisions based on the current percept only, ignoring previous state
    * Only work in a fully observable environment
* Model-based reflex agents
    * Create a model of what is happening in the world and, based upon this model, make decisions
    * Work in a partially observable environment
* Goal-based agents
    * Knowledge of the environment alone is not always enough to know what to do
    * The agent knows its goal and which situations are desirable
    * An expansion of the model-based agent: it has a goal to work towards as well as a model of the world
* Utility agents
    * Goal-based agents that also have a utility score measuring how close a given state is to the end goal
    * Act based on the goal as well as the best way to reach that goal
* Learning agents
    * Agents that learn from previous experience: they start acting with basic knowledge and then adapt automatically through learning
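
A sketch contrasting the first two agent types on a toy two-location vacuum world (a common textbook illustration); the percept format and action names are made up for this sketch:

{{{python
class SimpleReflexVacuum:
    """Simple reflex agent: decides from the current percept alone."""

    def act(self, percept):
        location, dirty = percept          # e.g. ("A", True)
        if dirty:
            return "suck"
        return "move_to_B" if location == "A" else "move_to_A"


class ModelBasedVacuum:
    """Model-based reflex agent: keeps an internal model of what it has seen."""

    def __init__(self):
        self.model = {}                    # believed cleanliness of each location

    def act(self, percept):
        location, dirty = percept
        self.model[location] = dirty       # update the model from the percept
        if dirty:
            return "suck"
        # Use the model: head for a location whose state is still unknown
        unknown = [loc for loc in ("A", "B") if loc not in self.model]
        return f"move_to_{unknown[0]}" if unknown else "idle"
}}}

The simple reflex agent needs the percept to tell it everything relevant (a fully observable environment), while the model-based agent can still act sensibly when it only sees its current location.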

Example: a taxi driver (written out as a data structure after the list)

* Percepts
    * Cameras, GPS, microphone, etc.
* Actions
    * Steer, accelerate, brake, etc.
* Goals
    * Safe, fast, legal, comfortable trip; maximize profit
* Environment
    * Roads, traffic, pedestrians, customers
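
The same taxi-driver description written out as a plain data structure; the TaskDescription class is just an illustrative container, not a standard API:

{{{python
from dataclasses import dataclass


@dataclass
class TaskDescription:
    percepts: list[str]
    actions: list[str]
    goals: list[str]
    environment: list[str]


taxi_driver = TaskDescription(
    percepts=["cameras", "GPS", "microphone"],
    actions=["steer", "accelerate", "brake"],
    goals=["safe, fast, legal, comfortable trip", "maximize profit"],
    environment=["roads", "traffic", "pedestrians", "customers"],
)
}}}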

An environment in which the agent can observe the complete state is called a
fully observable environment; otherwise it is partially observable. Most real
environments are only partially observable.

A stochastic environment is random in nature and cannot be completely
determined by the agent.

In a deterministic environment that is fully observable, agents do not need to
account for random chance.

If the environment can change while the agent is deliberating (computing what
to do), then the environment is dynamic; otherwise it is static. Static
environments are much easier to deal with, because the agent does not have to
constantly check whether it needs to change course.

If the number of possible states that can be perceived is finite, then the
environment is called a discrete environment; otherwise it is a continuous
environment. A chess game is discrete, because there is a finite number of
moves that can be played.

If there is only one agent in the environment, then it is called a single-agent
environment. However, if multiple agents are operating in the environment, then
it is a multi-agent environment. The design problems are much harder in a
multi-agent environment.
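
The environment properties above can be summarised per task. The sketch below simply tabulates them; the classifications of chess and taxi driving follow the paragraphs above and the usual textbook treatment.

{{{python
from dataclasses import dataclass


@dataclass
class EnvironmentProperties:
    fully_observable: bool
    deterministic: bool
    static: bool
    discrete: bool
    single_agent: bool


# Chess: the whole board is visible, moves have fixed effects, nothing changes
# while the agent deliberates (ignoring clocks), moves are finite, two players.
chess = EnvironmentProperties(
    fully_observable=True,
    deterministic=True,
    static=True,
    discrete=True,
    single_agent=False,
)

# Taxi driving: traffic is only partly visible, outcomes are uncertain, the
# world keeps changing while the agent thinks, controls are continuous, and
# many other drivers and pedestrians are acting at the same time.
taxi_driving = EnvironmentProperties(
    fully_observable=False,
    deterministic=False,
    static=False,
    discrete=False,
    single_agent=False,
)
}}}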