We plan to implement a robotic blackjack dealer using the Robix Rascal robotic arm system in combination with a digital web camera. Its movement will be predetermined for the initial dealing process, but will then respond to user input to complete the game. The most challenging part will be recognizing which cards are dealt in order to know if a user has won or lost. Before we get to that, though, we will focus on achieving the desired movement.
Emulating a blackjack dealer is an ideal task for a simple robot; in fact, a human blackjack dealer is little more than a biological robot, following a strict set of clearly laid out rules. These rules can easily be programmed into a computer simulation, and this has been done hundreds of times for computer blackjack games; since all that is needed is a set of if-else statements, it is almost trivial. The challenge behind our robotic dealer will not be the rules, but the physical aspects of the game: moving and recognizing the cards.
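As a concrete illustration of how little the rules themselves demand, a minimal sketch of the dealer's fixed hit/stand policy might look like the following (the function names and the hit-on-16 threshold reflect the usual house rule, not settled project code):

    // Sketch of the dealer's fixed decision rule; assumes the hand total
    // has already been computed. Whether to hit a soft 17 is a house-rule
    // detail left out here.
    bool dealerShouldHit(int handTotal)
    {
        return handTotal <= 16;   // hit on 16 or less, stand on 17 or more
    }

    bool busted(int handTotal)
    {
        return handTotal > 21;
    }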
The project will be split into 3 parts:
The first part will be to build an arm that suits our needs and to achieve the desired motions with it. The arm needs to be able to grasp a card, bring it in front of a camera to be identified, spin the card around to be viewed by a player, and move cards to desired positions. Because of this, most motion should be in the horizontal plane. We should write C functions to perform these basic motions, and write a simple proof of concept, perhaps picking up a lone card, spinning it around, and dropping it in front of a player, as sketched below.
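A rough sketch of the sort of motion functions and proof of concept we have in mind follows; the function names are placeholders, and the real versions will sit on top of the Robix library rather than just printing messages:

    #include <cstdio>

    // Hypothetical basic-motion functions; the real implementations will be
    // thin wrappers over Robix macros. For now each one just reports its action.
    void openClaw()          { std::printf("open claw\n"); }
    void closeClaw()         { std::printf("close claw\n"); }
    void rotateBase(int deg) { std::printf("rotate base %d degrees\n", deg); }
    void raiseArm(int steps) { std::printf("raise arm %d steps\n", steps); }
    void flipClaw()          { std::printf("flip claw 180 degrees\n"); }

    // Proof of concept: pick up a lone card, show it, drop it by the player.
    int main()
    {
        openClaw();
        raiseArm(-10);   // lower the claw onto the card
        closeClaw();     // grasp the card
        raiseArm(10);
        flipClaw();      // spin the card around for the player to see
        rotateBase(90);  // swing toward the player's position
        openClaw();      // release the card
        return 0;
    }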
The second phase of the project will involve finding a way to use the camera to determine what cards are being dealt, and getting the robotic arm to bring cards to the correct position in front of the camera. The camera, a D-Link webcam, can take moderate-resolution pictures, but we currently cannot find any libraries that would let us control the camera for direct capture. Thus, we might have to use some indirect method of reading image files.
The third phase of the project is to integrate the first two phases so that the robot will be able to deal a game of blackjack. This will involve dealing a hand to both the player and the dealer; accepting the player's requests for more cards (at the very least hits, perhaps also splits and double downs); determining if the player has busted; if not, determining whether the dealer needs to hit and whether the dealer has busted (which will involve flipping the dealer's cards over on the table); and determining a winner. The process of dealing off of the deck will be easiest with a dealer's shoe. In that case, the motions of the arm may be sufficient; if not, we might have to use a DC motor connected to one of the voltage-out ports on the Rascal (there are two general-purpose outputs) to spin a cylinder that contacts the card and pulls it off of the shoe so that the arm can grasp it.
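To make the integration concrete, a console-only skeleton of one round is sketched below. Everything in it is a stand-in: the random drawCard() takes the place of the arm dealing from the shoe, the player's fixed hit-below-17 strategy takes the place of real user input, aces are simply counted as 1, and a tie is lumped in with a dealer win rather than treated as a push.

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>
    #include <vector>

    // Crude stand-in for dealing off of the shoe; face cards count as 10, aces as 1.
    static int drawCard()
    {
        int r = std::rand() % 13 + 1;
        return r > 10 ? 10 : r;
    }

    static int total(const std::vector<int>& hand)
    {
        int sum = 0;
        for (size_t i = 0; i < hand.size(); ++i) sum += hand[i];
        return sum;
    }

    static bool dealerShouldHit(int t) { return t <= 16; }

    int main()
    {
        std::srand(static_cast<unsigned>(std::time(0)));
        std::vector<int> player, dealer;

        // Initial deal: two cards each, dealer's second card face down.
        player.push_back(drawCard()); dealer.push_back(drawCard());
        player.push_back(drawCard()); dealer.push_back(drawCard());

        // Player's turn: real user input would replace this fixed strategy.
        while (total(player) < 17) player.push_back(drawCard());
        if (total(player) > 21) { std::printf("Player busts, dealer wins\n"); return 0; }

        // Dealer's turn: flip the hole card, then hit by the fixed rule.
        while (dealerShouldHit(total(dealer))) dealer.push_back(drawCard());

        if (total(dealer) > 21 || total(player) > total(dealer))
            std::printf("Player wins (%d vs %d)\n", total(player), total(dealer));
        else
            std::printf("Dealer wins (%d vs %d)\n", total(dealer), total(player));
        return 0;
    }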
While Rodney Brooks's subsumption model is very elegant, it won't fit our robot very well, as the behavior of our robot needs to be planned out well in advance, i.e., if this happens, do this.
Gat's three-layer architecture is essentially how our robot works: there is the low-level work that the Robix software takes care of (PID control and such), higher-level movement, for which we wrote the code, and finally very high-level tasks, such as the recognition of cards using vision.
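As a rough sketch of how those three layers line up in our code (the names below are illustrative only, not existing functions):

    // Layer 1 (lowest): handled by the Robix software itself, e.g. the
    // PID control of each servo; we never touch this directly.

    // Layer 2 (middle): our movement code, thin wrappers over Robix macros.
    namespace movement {
        void grabCard();
        void showCardToCamera();
        void dropCardAt(int playerPosition);
    }

    // Layer 3 (top): the deliberative part, card recognition and game
    // logic, which decides which layer-2 actions to take.
    namespace game {
        int  recognizeCard();   // vision, phase 2
        void playRound();       // blackjack rules, phase 3
    }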
Though it may not seem like it, our robot shares some motion characteristics with the gastrointestinal robot: two of our motors will move in conjunction in an accordion-like fashion.
The robot arm has been designed to have both lateral and vertical motion. The current incarnation is attached to a pedestal with a servo, giving it 180 degrees of rotation around the pedestal. In addition, a second motor is in the lateral configuration, giving the arm a 270-degree span that it can reach. Two motors are used to control the vertical height, and when moved together they allow the arm to raise and lower itself without rotating the grasping claw. Since the motors only provide rotational force, a single joint would provide the ability to raise and lower the claw, but the claw would rotate 90 degrees as it was raised and lowered. We wanted the claw to stay at the same relative angle, no matter the orientation of the arm, to allow us to easily flip cards over with it. The arm is terminated by a grasping claw that can rotate 180 degrees from one vertical configuration to the other, allowing it to hold cards horizontally. The claw can open and close.
This configuration should allow the arm to grasp a card, hold it in front of a camera to view it, flip it over for the player to see, and drop the card in front of the player.
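The trick behind moving the two vertical motors together is simply that the second joint is driven by the negative of the first joint's angle, so the claw's absolute orientation never changes. A small runnable sketch of that compensation (the angles and names are ours, not Robix's):

    #include <cstdio>

    // Paired vertical joints: driving the wrist by the negative of the
    // shoulder angle keeps the claw at a constant absolute angle while
    // the arm raises and lowers (the "accordion" motion).
    double wristFor(double shoulderDeg)
    {
        return -shoulderDeg;   // compensate so the claw does not rotate
    }

    int main()
    {
        for (double shoulder = 0.0; shoulder <= 90.0; shoulder += 30.0) {
            double wrist = wristFor(shoulder);
            // Absolute claw angle = shoulder + wrist = 0 at every height.
            std::printf("shoulder %5.1f  wrist %5.1f  claw %5.1f\n",
                        shoulder, wrist, shoulder + wrist);
        }
        return 0;
    }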
The Robix software includes C++ libraries to control the robot. It provides a header file and a README that document all of the functions in the library, along with a sample Visual Studio project file. This project compiles under Visual Studio .NET, but the instructions included in the documentation on how to start a new project, written for an earlier version of Visual Studio, are out of date. In order to get a simple project to compile, we needed to perform the following steps:
Once these steps were completed, the project was finally able to read the Robix C++ libraries and compile. Interfacing with the arm was not especially hard once we determined that the command to execute a script didn't work as expected; instead, using macros in the script (which differ slightly from just using the script) seems to be the easiest way to control the robot. Our current plan is to write a series of macros for the basic functions of the robot, opening and closing the claw for example, followed by C++ wrapper functions for these macros. We will use macros read from a file rather than accessing the hardware directly (which is possible) because the methods for controlling the robot directly are not as robust and easy to use as the macro functions, especially when trying to use multiple motors at once.
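As a sketch of what one of those wrappers might look like: the single placeholder function below stands in for the actual Robix library call that runs a named macro (the exact routine is documented in the Robix header and is not reproduced here), and the macro names themselves live in our script file.

    #include <string>
    #include <cstdio>

    // Placeholder for the one point of contact with the Robix library;
    // its body would invoke the library routine that runs a named macro
    // from the loaded script file.
    void runRobixMacro(const std::string& macroName)
    {
        std::printf("(would run Robix macro \"%s\" here)\n", macroName.c_str());
    }

    // Thin C++ wrappers over the macros defined in our script.
    void openClaw()   { runRobixMacro("open_claw"); }
    void closeClaw()  { runRobixMacro("close_claw"); }
    void faceCamera() { runRobixMacro("face_camera"); }

    int main()
    {
        // Example: grab a card and hold it up to the camera.
        openClaw();
        closeClaw();
        faceCamera();
        return 0;
    }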
The vision part of this project has proven to be more challenging than we first anticipated. At first we were using an older D-Link model webcam to try to acquire images, but after a week or so of work we ditched it, as support for it and getting it to work were sketchy at best. After dropping the D-Link we moved on to a Veo Stingray webcam, which offered much better support. Additionally, the company offers an SDK that allows simple function calls to activate the camera, start it up, take snapshots or video, and shut the camera down.
Additional problems arose from Microsoft's Visual Studio .NET IDE. Since most of the sample code that we were able to find on the web was compiled with earlier versions of Visual Studio, it would not work, or even compile, with the .NET version of Visual Studio.
Eventually we were able to get around these problems and get data from the webcam. We currently have the webcam take a snapshot using Veo's API, and this snapshot is saved to the hard drive as a .bmp file; we were unable to get the raw data straight from the camera. As such, the way capture will work is that we will take a snapshot using the SDK, use an image class to read the data from the .bmp file on disk, and then do analysis once we have read the image data.
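A minimal sketch of the read-back half of that pipeline, assuming the SDK writes an uncompressed 24-bit .bmp (the file name is a placeholder, and the mean-brightness calculation stands in for the real card analysis):

    #include <cstdio>
    #include <vector>

    int main()
    {
        std::FILE* f = std::fopen("snapshot.bmp", "rb");
        if (!f) { std::printf("could not open snapshot.bmp\n"); return 1; }

        // Read the 54-byte BITMAPFILEHEADER + BITMAPINFOHEADER.
        unsigned char header[54];
        if (std::fread(header, 1, 54, f) != 54) { std::fclose(f); return 1; }

        long dataOffset = header[10] | header[11] << 8 | header[12] << 16 | header[13] << 24;
        long width      = header[18] | header[19] << 8 | header[20] << 16 | header[21] << 24;
        long height     = header[22] | header[23] << 8 | header[24] << 16 | header[25] << 24;

        long rowSize = ((width * 3 + 3) / 4) * 4;   // rows are padded to 4 bytes
        std::vector<unsigned char> row(rowSize);

        std::fseek(f, dataOffset, SEEK_SET);
        double sum = 0.0;
        for (long y = 0; y < height; ++y) {          // rows stored bottom-up, BGR order
            std::fread(&row[0], 1, rowSize, f);
            for (long x = 0; x < width; ++x)
                sum += (row[3*x] + row[3*x + 1] + row[3*x + 2]) / 3.0;
        }
        std::fclose(f);

        std::printf("%ldx%ld image, mean brightness %.1f\n",
                    width, height, sum / (width * height));
        return 0;
    }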
Currently, due to the difficulties we have had getting our webcam to work, we weren't able to get any image analysis done. But as that is one of the only parts of our project left, along with coordinating movement, it should make for a good part 3 of our project: recognizing the cards and telling the arm where to move based on that.