
CS 188: Introduction to Artificial Intelligence, Spring 2021

Project 4: Inference in Bayes Nets

Version 3.0. Last Updated: 03/16/2021

Due: Friday 4/2 at 11:59pm

Introduction

In this project, you will implement inference algorithms for Bayes Nets, specifically variable elimination and value-of-perfect-information computations. These inference algorithms will allow you to reason about the existence of invisible pellets and ghosts.

You can run the autograder for particular tests by commands of the form:

python autograder.py -t test_cases/q4/1-simple-eliminate

The code for this project contains the following files, available as a zip archive.

Files you'll edit:

factorOperations.py: Operations on Factors (join, eliminate, normalize).
inference.py: Inference algorithms (enumeration, variable elimination, likelihood weighting).
bayesAgents.py: Pacman agents that reason under uncertainty.

Files you should read but NOT edit:

bayesNet.py: The BayesNet and Factor classes.

Files you can ignore:

graphicsDisplay.py: Graphics for Pacman
graphicsUtils.py: Support for Pacman graphics
textDisplay.py: ASCII graphics for Pacman
ghostAgents.py: Agents to control ghosts
keyboardAgents.py: Keyboard interfaces to control Pacman
layout.py: Code for reading layout files and storing their contents
autograder.py: Project autograder
testParser.py: Parses autograder test and solution files
testClasses.py: General autograding test classes
test_cases/: Directory containing the test cases for each question
bayesNets2TestClasses.py: Project 4 specific autograding test classes

Files to Edit and Submit:

You will fill in portions of factorOperations.py, inference.py, and bayesAgents.py during the assignment. Once you have completed the assignment, you will submit a token generated by submission_autograder.py. Please do not change the other files in this distribution or submit any of our original files other than these files.

Evaluation: Your code will be autograded for technical correctness. Please do not change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder. However, the correctness of your implementation – not the autograder’s judgements – will be the final judge of your score. If necessary, we will review and grade assignments individually to ensure that you receive due credit for your work.

Academic Dishonesty: We will be checking your code against other submissions in the class for logical redundancy. If you copy someone else’s code and submit it with minor changes, we will know. These cheat detectors are quite hard to fool, so please don’t try. We trust you all to submit your own work only; please don’t let us down. If you do, we will pursue the strongest consequences available to us.

Getting Help: You are not alone! If you find yourself stuck on something, contact the course staff for help. Office hours, section, and the discussion forum are there for your support; please use them. If you can’t make our office hours, let us know and we will schedule more. We want these projects to be rewarding and instructional, not frustrating and demoralizing. But, we don’t know when or how to help unless you ask.

Discussion: Please be careful not to post spoilers.

Treasure-Hunting Pacman

Pacman has entered a world of mystery. Initially, the entire map is invisible. As he explores it, he learns information about neighboring cells. The map contains two houses: a ghost house, which is probably mostly red, and a food house, which is probably mostly blue. Pacman’s goal is to enter the food house while avoiding the ghost house.

Pacman will reason about which house is which based on his observations, and reason about the tradeoff between taking a chance or gathering more evidence. To enable this, you’ll implement probabilistic inference using Bayes nets.

To play for yourself, run:

python hunters.py -p KeyboardAgent -r

Bayes Nets and Factors

First, take a look at bayesNet.py to see the classes you'll be working with: BayesNet and Factor.

You can also run this file to see an example BayesNet and associated Factors:

python bayesNet.py

You should look at the printStarterBayesNet function – there are helpful comments that can make your life much easier later on.

The Bayes Net created in this function is shown below:

(Raining --> Traffic <-- Ballgame)

A summary of the terminology is given below:

  • Bayes Net : This is a representation of a probabilistic model as a directed acyclic graph and a set of conditional probability tables, one for each variable, as shown in lecture. The Traffic Bayes Net above is an example.
  • Factor: This stores a table of probabilities, although the sum of the entries in the table is not necessarily 1. A factor is of the general form P(X_1, ..., X_m, y_1, ..., y_n | Z_1, ..., Z_p, w_1, ..., w_q). Recall that lower-case variables have already been assigned. For each possible assignment of values to the X_i and Z_j variables, the factor stores a single number. The Z_j, w_k variables are said to be conditioned, while the X_i, y_l variables are unconditioned.
  • Conditional Probability Table (CPT): This is a factor satisfying two properties:
  1. Its entries must sum to 1 for each assignment of the conditioned variables.
  2. There is exactly one unconditioned variable.

The Traffic Bayes Net stores the following CPTs: P(Raining) , P(Ballgame) , P(Traffic|Ballgame, Raining).
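
To make these definitions concrete, here is a toy dictionary-based CPT for P(Traffic | Ballgame, Raining) with made-up numbers, together with a check of the two CPT properties. This is only an illustration; the project's Factor class in bayesNet.py stores its tables differently.

    from itertools import product

    # Made-up CPT for P(Traffic | Raining, Ballgame), stored as a map from
    # full assignments (r, b, t) to probabilities. The numbers are invented,
    # not the ones printed by bayesNet.py.
    domain = [True, False]
    cpt = {}
    for r, b in product(domain, domain):
        p = 0.9 if (r and b) else 0.6 if (r or b) else 0.1
        cpt[(r, b, True)] = p       # P(Traffic=true  | Raining=r, Ballgame=b)
        cpt[(r, b, False)] = 1 - p  # P(Traffic=false | Raining=r, Ballgame=b)

    # CPT property 1: entries sum to 1 for each assignment of the
    # conditioned variables (Raining, Ballgame).
    for r, b in product(domain, domain):
        assert abs(cpt[(r, b, True)] + cpt[(r, b, False)] - 1.0) < 1e-9
    # CPT property 2: exactly one unconditioned variable (Traffic).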

Question 1 (3 points): Bayes Net Structure

Implement the constructBayesNet function in bayesAgents.py . It constructs an empty Bayes net with the structure described below. (We’ll specify the actual factors in the next question.)

The treasure hunting world is generated according to the following Bayes net:

[Figure: the treasure-hunting Bayes net. X positions and Y positions are parents of food house and ghost house, which are in turn parents of the observation nodes.]

Don’t worry if this looks complicated! We’ll take it step by step. As described in the code for constructBayesNet , we build the empty structure by listing all of the variables, their values, and the edges between them. This figure shows the variables and the edges, but what about their values?

  • X positions determine which house goes on which side of the board. It is either food-left or ghost-left.
  • Y positions determine how the houses are vertically oriented. This single variable models the vertical positions of both houses simultaneously, and it has one of four values: both-top, both-bottom, left-top, and left-bottom. "left-top" is as the name suggests: the house on the left side of the board is on top, and the house on the right side of the board is on the bottom.
  • Food house and ghost house specify the actual positions of the two houses. They are both deterministic functions of "X positions" and "Y positions".
  • The observations are measurements that Pacman makes while traveling around the board. Note that there are many of these nodes, one for every board position that might be the wall of a house. If there is no house in a given location, the corresponding observation is none; otherwise it is either red or blue, with the precise distribution of colors depending on the kind of house. A sketch of these variables and edges as plain Python data follows this list.
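
Here is the sketch promised above: the net's variables, values, and edges written out as plain Python data. The constant names mimic those in bayesAgents.py but are assumptions; check the starter code for the exact strings and for the construction call that consumes them.

    # Hypothetical sketch of the net's variables, values, and edges.
    # Verify every name against the constants in bayesAgents.py.
    X_POS_VAR = "xPos"
    Y_POS_VAR = "yPos"
    FOOD_HOUSE_VAR = "foodHouse"
    GHOST_HOUSE_VAR = "ghostHouse"
    HOUSE_VALS = ["topLeft", "topRight", "bottomLeft", "bottomRight"]

    variableDomains = {
        X_POS_VAR: ["foodLeft", "ghostLeft"],
        Y_POS_VAR: ["bothTop", "bothBottom", "leftTop", "leftBottom"],
        FOOD_HOUSE_VAR: HOUSE_VALS,
        GHOST_HOUSE_VAR: HOUSE_VALS,
        # ...plus one observation variable per possible house-wall cell,
        # each with domain ["red", "blue", "none"].
    }

    edges = [
        (X_POS_VAR, FOOD_HOUSE_VAR), (X_POS_VAR, GHOST_HOUSE_VAR),
        (Y_POS_VAR, FOOD_HOUSE_VAR), (Y_POS_VAR, GHOST_HOUSE_VAR),
        # ...plus an edge from each house variable to each observation
        # variable, since a cell's color depends on which house is where.
    ]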

Grading: To test and debug your code, run

python autograder.py -q q1

Question 2 (1 point): Bayes Net Probabilities

Implement the fillYCPT function in bayesAgents.py . It takes the Bayes net you constructed in the previous problem and specifies the factor governing the Y position variable. (We’ve already filled in the X position, house, and observation factors for you.)

Recall the structure of the Bayes net from Question 1.

For an example of how to construct factors, look at the implementation of the factor for X positions in fillXCPT .

The Y positions are given by the values BOTH_TOP_VAL, BOTH_BOTTOM_VAL, LEFT_TOP_VAL, LEFT_BOTTOM_VAL . These values, and their associated probabilities PROB_BOTH_TOP, PROB_BOTH_BOTTOM, PROB_ONLY_LEFT_TOP, PROB_ONLY_LEFT_BOTTOM , are provided as constants at the top of the file.

If you’re interested, you can look at the computation for house positions. All you need to remember is that each house can be in one of four positions: top-left, top-right, bottom-left, or bottom-right.

Hint: There are only four entries in the Y position factor, so you can specify each of those by hand.
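
Because the factor has only four entries, the whole function can be written by hand. The sketch below assumes the Factor constructor and bayesNet.setCPT are used exactly as in the provided fillXCPT; verify those signatures in the starter code before relying on this.

    # Minimal sketch of fillYCPT, assuming the same Factor construction and
    # setCPT pattern as the provided fillXCPT. The VAL/PROB constants are
    # the ones defined at the top of bayesAgents.py.
    import bayesNet as bn

    def fillYCPT(bayesNet, gameState):
        yFactor = bn.Factor([Y_POS_VAR], [], bayesNet.variableDomainsDict())
        yFactor.setProbability({Y_POS_VAR: BOTH_TOP_VAL}, PROB_BOTH_TOP)
        yFactor.setProbability({Y_POS_VAR: BOTH_BOTTOM_VAL}, PROB_BOTH_BOTTOM)
        yFactor.setProbability({Y_POS_VAR: LEFT_TOP_VAL}, PROB_ONLY_LEFT_TOP)
        yFactor.setProbability({Y_POS_VAR: LEFT_BOTTOM_VAL}, PROB_ONLY_LEFT_BOTTOM)
        bayesNet.setCPT(Y_POS_VAR, yFactor)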

Grading: To test and debug your code, run

python autograder.py -q q2

Question 3 (5 points): Join Factors

Implement the joinFactors function in factorOperations.py . It takes in a list of Factors and returns a new Factor whose probability entries are the product of the corresponding rows of the input Factors.

joinFactors can be used as the product rule: for example, if we have a factor of the form P(X|Y) and another factor of the form P(Y), then joining these factors yields P(X, Y). So joinFactors allows us to incorporate probabilities for conditioned variables (in this case, Y). However, you should not assume that joinFactors is called on probability tables; it is possible to call joinFactors on Factors whose rows do not sum to 1.
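
To illustrate the computation itself (not the project's Factor API), here is a toy join over dictionary-based tables, where each row is a frozenset of (variable, value) pairs and two rows combine only when they agree on shared variables.

    # Toy illustration of the join computation on dictionary-based tables.
    # Not the project's Factor API.
    def join_tables(f, g):
        joined = {}
        for row_f, p_f in f.items():
            for row_g, p_g in g.items():
                merged = dict(row_f)
                consistent = True
                for var, val in row_g:
                    if merged.get(var, val) != val:  # disagree on a shared var
                        consistent = False
                        break
                    merged[var] = val
                if consistent:
                    joined[frozenset(merged.items())] = p_f * p_g
        return joined

    # join_tables on P(X|Y)'s table and P(Y)'s table gives P(X, Y)'s entries:
    pY = {frozenset({('Y', 0)}): 0.3, frozenset({('Y', 1)}): 0.7}
    pX_given_Y = {
        frozenset({('X', 0), ('Y', 0)}): 0.9, frozenset({('X', 1), ('Y', 0)}): 0.1,
        frozenset({('X', 0), ('Y', 1)}): 0.4, frozenset({('X', 1), ('Y', 1)}): 0.6,
    }
    pXY = join_tables(pX_given_Y, pY)  # e.g. P(X=0, Y=0) = 0.9 * 0.3 = 0.27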

Grading: To test and debug your code, run

python autograder.py -q q3

It may be useful to run specific tests during debugging, to see only one set of factors print out. For example, to only run the first test, run:

python autograder.py -t test_cases/q3/1-product-rule

Hints and Observations:

  • Your joinFactors should return a new Factor.
  • Here are some examples of what joinFactors can do:
  1. joinFactors(P(X|Y), P(Y)) = P(X, Y)
  2. joinFactors(P(V, W|X, Y, Z), P(X, Y|Z)) = P(V, W, X, Y|Z)
  3. joinFactors(P(X|Y, Z), P(Y)) = P(X, Y|Z)
  4. joinFactors(P(V|W), P(X|Y), P(Z)) = P(V, X, Z|W, Y)
  • For a general joinFactors operation, which variables are unconditioned in the returned Factor? Which variables are conditioned?
  • Factors store a variableDomainsDict, which maps each variable to a list of values that it can take on (its domain). A Factor gets its variableDomainsDict from the BayesNet from which it was instantiated. As a result, it contains all the variables of the BayesNet, not only the unconditioned and conditioned variables used in the Factor. For this problem, you may assume that all the input Factors have come from the same BayesNet, so their variableDomainsDicts are all the same.

Question 4 (4 points): Eliminate

Implement the eliminate function in factorOperations.py . It takes a Factor and a variable to eliminate and returns a new Factor that does not contain that variable. This corresponds to summing all of the entries in the Factor which only differ in the value of the variable being eliminated.
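
In the same toy dictionary representation used in the joinFactors sketch above, elimination is a single pass that merges rows differing only in the eliminated variable:

    # Toy illustration of eliminate on dictionary-based tables (rows are
    # frozensets of (var, value) pairs, as in the joinFactors sketch).
    def eliminate_var(f, var):
        result = {}
        for row, p in f.items():
            reduced = frozenset((v, val) for v, val in row if v != var)
            result[reduced] = result.get(reduced, 0.0) + p  # sum matching rows
        return result

    # eliminate_var on P(X, Y)'s table with var='Y' gives P(X)'s entries.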

Grading: To test and debug your code, run

python autograder.py -q q4

It may be useful to run specific tests during debugging, to see only one set of factors print out. For example, to only run the first test, run:

python autograder.py -t test_cases/q4/1-simple-eliminate

Hints and Observations:

  • Your eliminate should return a new Factor.
  • eliminate can be used to marginalize variables from probability tables. For example:
  1. eliminate(P(X, Y|Z), Y) = P(X|Z)
  2. eliminate(P(X, Y|Z), X) = P(Y|Z)
  • For a general eliminate operation, which variables are unconditioned in the returned Factor? Which variables are conditioned?
  • Remember that Factors store the variableDomainsDict of the original BayesNet, and not only the unconditioned and conditioned variables that they use. As a result, the returned Factor should have the same variableDomainsDict as the input Factor.

Question 5 (4 points): Normalize

Implement the normalize function in factorOperations.py . It takes a Factor as input and normalizes it; that is, it scales all of the entries in the Factor such that they sum to 1. If the sum of probabilities in the input Factor is 0, you should return None .
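
In the toy dictionary representation, normalization is one division plus the required None for an all-zero table. Note that the project's real normalize must also follow the docstring's rule for unconditioned variables with a single-entry domain, which this sketch ignores.

    # Toy illustration of normalize on a dictionary-based table.
    def normalize_table(f):
        total = sum(f.values())
        if total == 0:
            return None  # the spec requires None for an all-zero table
        return {row: p / total for row, p in f.items()}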

Grading: To test and debug your code, run

python autograder.py -q q5

It may be useful to run specific tests during debugging, to see only one set of factors print out. For example, to only run the first test, run:

python autograder.py -t test_cases/q5/1-preNormalized

Hints and Observations:

  • Your normalize should return a new Factor.
  • normalize does not affect probability distributions (since probability distributions must already sum to 1).
  • For a general normalize operation, which variables are unconditioned in the returned Factor? Which variables are conditioned? Make sure to read the docstring of normalize for more instructions. In particular, pay attention to the treatment of unconditioned variables with exactly one entry in their domain.
  • Remember that Factors store the variableDomainsDict of the original BayesNet, and not only the unconditioned and conditioned variables that they use. As a result, the returned Factor should have the same variableDomainsDict as the input Factor.

Question 6 (4 points): Variable Elimination

Implement the inferenceByVariableElimination function in inference.py . It answers a probabilistic query, which is represented using a BayesNet, a list of query variables, and the evidence.

Grading: To test and debug your code, run

python autograder.py -q q6

It may be useful to run specific tests during debugging, to see only one set of factors print out. For example, to only run the first test, run:

python autograder.py -t test_cases/q6/1-disconnected-eliminate

Hints and Observations:

  • The algorithm should iterate over the hidden variables in elimination order, joining over and then eliminating each one in turn, until only the query and evidence variables remain. A sketch of this loop appears after this list.
  • The probabilities in your output factor should sum to 1, so that it is a true conditional probability, conditioned on the evidence.
  • Look at the inferenceByEnumeration function in inference.py for an example of how to use the desired functions. (Reminder: inference by enumeration first joins over all the variables and then eliminates all the hidden variables. In contrast, variable elimination interleaves join and eliminate by iterating over all the hidden variables, performing a join and eliminate on a single hidden variable before moving on to the next.)
  • You will need to take care of the special case where a factor you have joined only has one unconditioned variable (the docstring specifies what to do in greater detail).
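
Here is the sketch of the loop described in the first hint. It assumes the joinFactorsByVariable helper in factorOperations.py returns the remaining factors together with the joined one, and that getAllCPTsWithEvidence behaves as in inferenceByEnumeration; verify both against the starter code before copying anything.

    # Sketch of the variable-elimination loop; the helper signatures below
    # are assumptions to check against inferenceByEnumeration and
    # factorOperations.py.
    from factorOperations import joinFactorsByVariable, joinFactors
    from factorOperations import eliminate, normalize

    def inferenceByVariableElimination(bayesNet, queryVariables,
                                       evidenceDict, eliminationOrder):
        factors = bayesNet.getAllCPTsWithEvidence(evidenceDict)
        for hidden in eliminationOrder:
            factors, joined = joinFactorsByVariable(factors, hidden)
            # Special case: if the joined factor's only unconditioned
            # variable is the one being eliminated, summing it out would
            # leave no unconditioned variables, so discard it instead.
            if len(joined.unconditionedVariables()) == 1:
                continue
            factors.append(eliminate(joined, hidden))
        return normalize(joinFactors(factors))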

Question 7 (1 point): Marginal Inference

Inside bayesAgents.py , use the inference.inferenceByVariableElimination function you just wrote to complete the function getMostLikelyFoodHousePosition . This function should compute the marginal distribution over positions of the food house, then return the most likely position.

The return value should be a dictionary containing a single key-value pair, {FOOD_HOUSE_VAR: best_house_val} , where best_house_val is the most likely position from HOUSE_VALS . This is used by Bayesian Pacman, who wanders around randomly collecting information for a fixed number of timesteps, then heads directly to the house most likely to contain food.

Grading: To test and debug your code, run

python autograder.py -q q7

Hint: You may find Factor.getProbability(…) and Factor.getAllPossibleAssignmentDicts(…) to be useful.
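
A sketch using the hinted methods follows. The attribute name self.bayesNet and the way the elimination-order argument is passed are assumptions; check your Q6 signature and bayesAgents.py for the real names.

    # Hypothetical sketch of getMostLikelyFoodHousePosition.
    import inference

    def getMostLikelyFoodHousePosition(self, evidence):
        # Elimination order passed as None here; how a default order is
        # chosen depends on your Q6 implementation (assumption).
        factor = inference.inferenceByVariableElimination(
            self.bayesNet, [FOOD_HOUSE_VAR], evidence, None)
        assignments = factor.getAllPossibleAssignmentDicts()
        best = max(assignments, key=factor.getProbability)
        return {FOOD_HOUSE_VAR: best[FOOD_HOUSE_VAR]}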

Question 8 (3 points): Value of Perfect Information

Bayesian Pacman spends a lot of time wandering around randomly, even when further exploration doesn’t provide any additional value. Can we do something smarter?

We’ll evaluate VPI Pacman in a more restricted setting: everything in the world has been observed, except for the colors of one of the houses’ walls. VPI Pacman has three choices:

  1. immediately enter the already-explored house
  2. immediately enter the hidden house
  3. explore the outside of the hidden house, and then make a decision about where to go

Part (a)

First look at computeEnterValues . This function computes the expected value of entering the top left and top right houses. Again, you can run the inference function you already wrote on the Bayes net self.bayesNet to do all the heavy lifting here. First compute P(foodHouse = topLeft, ghostHouse = topRight | evidence) and P(foodHouse = topRight, ghostHouse = topLeft | evidence). Then use these two probabilities to compute the expected rewards for entering the top left and top right houses. Use WON_GAME_REWARD and GHOST_COLLISION_REWARD as the rewards for entering the foodHouse and ghostHouse, respectively.
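
For instance, once the first of those probabilities has been read out of the normalized inference result, the two enter values are plain expectations. A minimal sketch of the arithmetic, where pFoodLeft is a hypothetical name for that probability:

    # Illustrative arithmetic only; pFoodLeft stands for
    # P(foodHouse = topLeft, ghostHouse = topRight | evidence),
    # read from your Q6 inference result.
    leftValue = (pFoodLeft * WON_GAME_REWARD
                 + (1 - pFoodLeft) * GHOST_COLLISION_REWARD)
    rightValue = ((1 - pFoodLeft) * WON_GAME_REWARD
                  + pFoodLeft * GHOST_COLLISION_REWARD)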

Part (b)

Next look at computeExploreValue . This function computes the expected value of exploring all of the hidden cells, and then making a decision. We’ve provided a helper method, getExplorationProbsAndOutcomes , which returns a list of future observations Pacman might make, and the probability of each. To calculate the value of the extra information Pacman will gain, you can use the following formula:

E[value of exploration] = Σ_evidence P(evidence) · max_action E[value of action | evidence]

Note that E[value of action | evidence] is exactly the quantity computed by computeEnterValues , so to compute the value of exploration, you can call computeEnterValues again with the hypothetical evidence provided by getExplorationProbsAndOutcomes .
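
Putting the formula and this observation together, computeExploreValue reduces to one probability-weighted loop. The sketch below assumes getExplorationProbsAndOutcomes yields (probability, hypotheticalEvidence) pairs and that computeEnterValues takes the same elimination-order argument; check both against the starter code.

    # Sketch of computeExploreValue; the pair ordering returned by
    # getExplorationProbsAndOutcomes and the exact computeEnterValues
    # signature are assumptions to verify.
    def computeExploreValue(self, evidence, enterEliminationOrder):
        exploreValue = 0.0
        for prob, outcomeEvidence in self.getExplorationProbsAndOutcomes(evidence):
            enterLeft, enterRight = self.computeEnterValues(
                outcomeEvidence, enterEliminationOrder)
            exploreValue += prob * max(enterLeft, enterRight)
        return exploreValue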

Grading: To test and debug your code, run

python autograder.py -q q8

Hint:

After exploring, Pacman will again need to compute the expected value of entering the left and right houses. Fortunately, you’ve already written a function to do this! Your solution to computeExploreValue can rely on your solution to computeEnterValues to determine the value of future observations.

Submission

In order to submit your project, run python submission_autograder.py and submit the generated token file bayesnet.token to the Project 4 assignment on Gradescope.
