FTC2023-2024: CenterStage


Game Resources

Control Award

Autonomous objectives

Our autonomous program scores up to 50 points (scoring the purple and yellow pixels according to the Team Prop position and parking backstage) or 55 points (in addition to the 50-point auto, we pick up an extra white pixel from the stack and score it on the backdrop).

Before the match even starts, we use vision to track the Team Prop location so that we can determine which Spike Mark to score our purple pixel on and which slot on the backdrop to score the yellow pixel in. To keep autonomous reliable and accurate, we also use vision to detect the AprilTags on the backdrop and re-localize the robot’s absolute location on the field. This paid off in our last competition when our robot hit our alliance partner’s robot, which would normally be unrecoverable; with 3-dead-wheel odometry and AprilTag re-localization, our Pure Pursuit driving algorithm corrected the robot and still scored the pixel on the backdrop accurately. We also added a distance sensor on the pixel tray so that the robot approaches the backdrop to the correct distance and scores the pixel without hitting the backdrop. A second distance sensor on the pixel tray detects whether we have successfully picked up a pixel, which is especially useful in autonomous, where we need to know for sure that we have picked up white pixels from the stack before moving on to score them.

To avoid writing multiple OpModes for the many permutations of autonomous (red or blue alliance, starting position on the audience or backstage side, parking at the backstage corner or center, etc.), we wrote just one autonomous OpMode. With our menu system, which prompts the driver to choose the alliance color, starting position, parking position and scoring strategy (2 or 2+1 pixels), a single autonomous OpMode handles all combinations by examining the auto choices. In addition, since the red and blue alliance paths are mirror images of each other, we wrote our code only for the blue alliance and call the method adjustPoseByAlliance to translate path points for the red alliance, which drastically reduces code complexity.
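
Since the mirroring trick is central to keeping our autonomous code small, here is a minimal sketch of the idea behind adjustPoseByAlliance, assuming a field frame in which the blue and red sides mirror across the X axis. The Pose2D class and the sign convention are illustrative; the actual method in our library may use a different frame.

<syntaxhighlight lang="java">
// Minimal sketch of mirroring a blue-alliance path point for the red alliance.
// Assumes the two alliances are mirror images across the X axis; the real
// adjustPoseByAlliance may use a different frame or sign convention.
public class PoseMirror {
    /** Simple pose holder: x, y in inches, heading in degrees (illustrative). */
    public static class Pose2D {
        public final double x, y, heading;
        public Pose2D(double x, double y, double heading) {
            this.x = x;
            this.y = y;
            this.heading = heading;
        }
        @Override
        public String toString() {
            return String.format("(%.1f, %.1f, %.1f deg)", x, y, heading);
        }
    }

    /** Mirrors a blue-alliance pose across the X axis for the red alliance. */
    public static Pose2D adjustPoseByAlliance(Pose2D bluePose, boolean isRedAlliance) {
        if (!isRedAlliance) {
            return bluePose;    // blue paths are authored directly
        }
        // Negate y and heading so the path flips to the red side of the field.
        return new Pose2D(bluePose.x, -bluePose.y, -bluePose.heading);
    }

    public static void main(String[] args) {
        Pose2D blueSpikeMark = new Pose2D(23.0, 36.0, 90.0);   // hypothetical waypoint
        System.out.println("Blue: " + blueSpikeMark);
        System.out.println("Red:  " + adjustPoseByAlliance(blueSpikeMark, true));
    }
}
</syntaxhighlight>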

Sensors used

We use 11 sensors to maximize robot efficiency and reliability.

  • Webcam (2) – Front for Team Prop/Pixels and back for AprilTag detection
  • Pixel Tray Distance Sensors (2) – one detects the proper distance to the backdrop and the other detects whether we have picked up a pixel
  • Odometry Wheels (3) - keep track of robot’s absolute field position (x, y) during auto and TeleOp
  • Motor Encoder (1) – detects the position of the elevator
  • Absolute Encoder (1) – detect the absolute position of the arm angle
  • IMU/Gyro (1) - measure robot heading
  • Limit Switch (1) – sets the lower limit position of the elevator, which is necessary since the elevator encoder is not absolute

Key Algorithms

  • AprilTag detection – uses OpenCV to detect AprilTags on the backdrop as well as on the audience wall for re-localizing the robot’s field location.
  • Color Blob detection – uses OpenCV to detect the red or blue Team Prop.
  • Pure Pursuit – autonomous path following.
  • Homography – calculates the real-world locations of detected vision targets.
  • adjustPoseByAlliance – converts blue alliance path points for the red alliance.
  • Field Oriented Driving – allows the driver to use the field as the reference frame; the robot drives forward relative to the field no matter what heading it is pointing at.
  • Gyro-Assist Driving – because of mechanical imperfections, the robot may curve when driving straight; this maintains the robot’s heading by using the gyro to redistribute power to the wheels.
  • AutoChoice Menus – prompt the driver to choose autonomous options so that one auto OpMode can handle all permutations of the choices. This is one of the features of our library, which is widely used by teams all over the world.
  • Localization – integrates odometry to keep track of the robot’s absolute field location (x, y, heading).
  • Stall Detection – detects motor stall (i.e. power has been applied to the motor but it is not moving).
  • Priority LED Indicator – allows different subsystems to show their status with different LED color patterns without trampling on each other.
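
As an illustration of one of these algorithms, below is a minimal sketch of field-oriented driving for a mecanum drive base: the driver’s stick vector is rotated by the negative of the robot heading (from the gyro) before the usual mecanum mix. The wheel ordering and mixing signs here are assumptions; our library wraps this differently, but the math is the same idea.

<syntaxhighlight lang="java">
// Minimal sketch of field-oriented mecanum driving: rotate the field-frame
// stick vector into the robot frame, then apply a standard mecanum mix.
public class FieldOrientedDrive {
    /** Wheel powers in order: frontLeft, frontRight, backLeft, backRight. */
    public static double[] mecanumFieldDrive(double stickX, double stickY,
                                             double turn, double headingDegrees) {
        double headingRad = Math.toRadians(headingDegrees);
        // Rotate the field-frame stick vector by -heading into the robot frame.
        double robotX = stickX * Math.cos(-headingRad) - stickY * Math.sin(-headingRad);
        double robotY = stickX * Math.sin(-headingRad) + stickY * Math.cos(-headingRad);
        // Standard mecanum mix (assumed wheel ordering and signs).
        double fl = robotY + robotX + turn;
        double fr = robotY - robotX - turn;
        double bl = robotY - robotX + turn;
        double br = robotY + robotX - turn;
        // Normalize so no wheel power exceeds 1.0.
        double max = Math.max(1.0,
            Math.max(Math.max(Math.abs(fl), Math.abs(fr)), Math.max(Math.abs(bl), Math.abs(br))));
        return new double[] {fl / max, fr / max, bl / max, br / max};
    }

    public static void main(String[] args) {
        // Robot turned 90 deg from the driver; stick forward should still move
        // the robot away from the driver (here, as a strafe in the robot frame).
        double[] powers = mecanumFieldDrive(0.0, 1.0, 0.0, 90.0);
        System.out.printf("FL=%.2f FR=%.2f BL=%.2f BR=%.2f%n",
            powers[0], powers[1], powers[2], powers[3]);
    }
}
</syntaxhighlight>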

Driver Controlled Enhancements (Auto-Assist)

Our TeleOp provides Auto-Assist features that automate picking up pixels and scoring them on the backdrop. This eliminates human errors such as driving the robot into the backdrop and de-scoring pixels, or mis-picking pixels when the driver does not have a clear view of them.

Auto-Assist Pickup: (Picking up a pixel in front)

  1. Use vision to find the closest pixel in front. Use Homography to calculate the pixel’s real world coordinate.
  2. Navigate the robot to the pixel using PurePursuit Drive.
  3. Turn on the intake to pick up the pixel and automatically stop once the pixel is sensed in the pixel tray.
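
A rough outline of how this pickup sequence can be composed is sketched below. The Vision, Drive and Intake interfaces and their method names are hypothetical stand-ins; the real robot uses our library’s vision, Pure Pursuit and task framework instead.

<syntaxhighlight lang="java">
// Outline of the Auto-Assist pickup sequence as a simple state machine.
public class AutoAssistPickupSketch {
    /** Hypothetical minimal views of the subsystems involved. */
    interface Vision { double[] findClosestPixelFieldPose(); } // {x, y} from homography, or null
    interface Drive  { void purePursuitTo(double x, double y); boolean onTarget(); }
    interface Intake { void setOn(boolean on); boolean hasPixel(); }

    enum State { FIND_PIXEL, DRIVE_TO_PIXEL, PICKUP, DONE }

    private final Vision vision;
    private final Drive drive;
    private final Intake intake;
    private State state = State.FIND_PIXEL;
    private double[] pixelPose;

    public AutoAssistPickupSketch(Vision vision, Drive drive, Intake intake) {
        this.vision = vision;
        this.drive = drive;
        this.intake = intake;
    }

    /** Called periodically from the OpMode loop; returns true when finished. */
    public boolean run() {
        switch (state) {
            case FIND_PIXEL:
                // Step 1: locate the closest pixel; homography gives its field coordinate.
                pixelPose = vision.findClosestPixelFieldPose();
                if (pixelPose != null) state = State.DRIVE_TO_PIXEL;
                break;
            case DRIVE_TO_PIXEL:
                // Step 2: follow a Pure Pursuit path to the pixel.
                drive.purePursuitTo(pixelPose[0], pixelPose[1]);
                if (drive.onTarget()) state = State.PICKUP;
                break;
            case PICKUP:
                // Step 3: run the intake until the tray sensor reports a pixel.
                intake.setOn(true);
                if (intake.hasPixel()) {
                    intake.setOn(false);
                    state = State.DONE;
                }
                break;
            case DONE:
                return true;
        }
        return false;
    }
}
</syntaxhighlight>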

Auto-Assist Scoring: (Scoring a pixel onto the selected slot on the backdrop; this is used by both autonomous and TeleOp)

  1. Use vision to find the AprilTags on the backdrop.
  2. Use the AprilTag’s absolute field location to calculate the robot’s absolute field location and re-localize it.
  3. Raise the arm and elevator to the selected scoring height.
  4. Use PurePursuit Drive to approach the selected slot on the backdrop.
  5. Score the pixel.
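
The re-localization step reduces to simple 2D geometry. The sketch below assumes the robot heading comes from the gyro and that vision gives the tag’s position in the robot frame; the actual library also accounts for the camera mounting offset and the full 3D pose.

<syntaxhighlight lang="java">
// Simplified 2D sketch of AprilTag re-localization: knowing the tag's fixed
// field position and where the tag appears relative to the robot, solve for
// the robot's field position. Heading is assumed to come from the gyro.
public class AprilTagRelocalize {
    public static double[] robotFieldPosition(double tagFieldX, double tagFieldY,
                                              double robotHeadingDeg,
                                              double relForward, double relLeft) {
        double theta = Math.toRadians(robotHeadingDeg);
        // Rotate the robot-frame offset to the tag into the field frame.
        double fieldDx = relForward * Math.cos(theta) - relLeft * Math.sin(theta);
        double fieldDy = relForward * Math.sin(theta) + relLeft * Math.cos(theta);
        // tagField = robotField + fieldOffset  =>  robotField = tagField - fieldOffset.
        return new double[] {tagFieldX - fieldDx, tagFieldY - fieldDy};
    }

    public static void main(String[] args) {
        // Hypothetical numbers: tag at (60, 36), robot facing the tag (0 deg),
        // tag seen 18 in ahead and 2 in to the left of the robot.
        double[] robot = robotFieldPosition(60.0, 36.0, 0.0, 18.0, 2.0);
        System.out.printf("Robot at (%.1f, %.1f)%n", robot[0], robot[1]);
    }
}
</syntaxhighlight>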

Smart Intake

The operator can put the intake into auto-assist mode by pressing a button: the intake turns on and automatically stops when a pixel is detected in the pixel tray.
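
A minimal TeleOp sketch of this behavior is shown below. The hardware names ("intake", "trayDistance"), the gamepad button and the 2-inch threshold are illustrative assumptions, not our actual configuration.

<syntaxhighlight lang="java">
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.DistanceSensor;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

@TeleOp(name = "Smart Intake Sketch")
public class SmartIntakeSketch extends LinearOpMode {
    static final double PIXEL_THRESHOLD_INCHES = 2.0;  // assumed "pixel present" distance
    static final double INTAKE_POWER = 1.0;

    @Override
    public void runOpMode() {
        DcMotor intake = hardwareMap.get(DcMotor.class, "intake");
        DistanceSensor trayDistance = hardwareMap.get(DistanceSensor.class, "trayDistance");
        boolean autoIntakeActive = false;

        waitForStart();
        while (opModeIsActive()) {
            if (gamepad2.a) {
                autoIntakeActive = true;            // operator arms the smart intake
            }
            boolean pixelInTray =
                trayDistance.getDistance(DistanceUnit.INCH) < PIXEL_THRESHOLD_INCHES;
            if (autoIntakeActive && pixelInTray) {
                autoIntakeActive = false;           // auto-stop once a pixel is sensed
            }
            intake.setPower(autoIntakeActive ? INTAKE_POWER : 0.0);
            telemetry.addData("pixel in tray", pixelInTray);
            telemetry.update();
        }
    }
}
</syntaxhighlight>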

Priority LEDs Displaying Subsystem Status

During autonomous initialization, our LEDs change color depending on the Team Prop location detected by vision (1-violet, 2-green, 3-yellow). During autonomous, the LEDs flash cyan when vision detects an AprilTag on the backdrop, giving the drive team confirmation of the detection. In TeleOp, different subsystems display their status with different color patterns. To avoid subsystems fighting over the LEDs, each status color pattern is assigned a priority. For example, our LEDs flash white when Smart Claw is enabled, but if the sensor detects the pole, the LEDs flash yellow instead of white. Once the pole is out of sight, the LEDs return to white.
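
The arbitration itself can be as simple as keeping the active patterns in a priority-ordered map and always displaying the highest-priority one, as in the sketch below; the pattern names and priority numbers are illustrative, not our actual assignments.

<syntaxhighlight lang="java">
// Minimal sketch of priority-based LED arbitration: each subsystem requests a
// pattern at a fixed priority, and only the highest-priority active request is
// shown, so subsystems never trample each other.
import java.util.Map;
import java.util.TreeMap;

public class PriorityLedSketch {
    // Higher key = higher priority; TreeMap keeps requests sorted by priority.
    private final TreeMap<Integer, String> activePatterns = new TreeMap<>();

    /** A subsystem turns its pattern on or off at its assigned priority. */
    public void setPattern(int priority, String pattern, boolean active) {
        if (active) {
            activePatterns.put(priority, pattern);
        } else {
            activePatterns.remove(priority);
        }
    }

    /** The pattern the LEDs should actually display right now. */
    public String currentPattern() {
        Map.Entry<Integer, String> top = activePatterns.lastEntry();
        return top != null ? top.getValue() : "off";
    }

    public static void main(String[] args) {
        PriorityLedSketch leds = new PriorityLedSketch();
        leds.setPattern(1, "solid white (auto-assist enabled)", true);
        leds.setPattern(5, "flashing yellow (target detected)", true);
        System.out.println(leds.currentPattern()); // yellow wins while the target is in view
        leds.setPattern(5, "flashing yellow (target detected)", false);
        System.out.println(leds.currentPattern()); // falls back to white
    }
}
</syntaxhighlight>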

Engineering notebook references

Page References

  • TeleOp Auto-Assist – notebook page 15
  • Pure Pursuit + Odometry – notebook page 14
  • Homography – notebook page 13
  • Vision – notebook page 12

Autonomous program diagrams

File:FTC2023-2024 AutoPath.png

Software demos to show tomorrow:
  • Auto-Assist scoring on the backdrop
  • Auto init with Team Prop
  • Auto-Assist intake


Old Stuff

Our strategy:
  1. Score “Auto Bonus” and “Parking” – easy, high point value tasks.
  2. Score as much freight as possible in the Alliance Shipping Hub top level – difficult but high point value.

We use over 20 sensors, in combination with clever software, to maximize our robot’s efficiency and make our robot incredibly reliable in a wide variety of conditions.

  • Webcam – takes an image of the Team Scoring Element and uses a custom-trained machine-learning model to identify the position of the Team Scoring Element during Autonomous.
  • Odometers (3) – encoders positioned on the front, left, and right of our robot track the rotation of 3 omni wheels; using trigonometry we are able to accurately localize the robot.
  • IMU – an Inertial Measurement Unit determines our robot’s exact heading/orientation to minimize heading drift.
  • Color sensors (2) – placed on the robot’s sides for detecting the field perimeter. This lets the robot confirm that it is touching the wall and reset our x-position to remove any x-axis drift.
  • Ultrasonic sensors (2) – mounted at the front of the robot to measure our exact y-position and remove any y-axis drift.
  • Force Sensitive Resistors (2) – placed in our intakes to identify whether freight has been picked up, allowing for maximum efficiency when intaking freight.
  • Motor Current Draw sensor (1) – measures the current draw of our intake motor to further confirm the force sensitive resistor readings. If the robot sees a spike in current draw from the intake, we can safely assume that we have freight.
  • Magnetic sensors (2) – Hall Effect sensors placed on the hinge point of our intake detect when the intake has been raised and is ready to transfer the freight into our deposit. This prevents mis-transfers and maximizes transferring efficiency.
  • Voltage sensor (1) – measures the battery voltage so our software can automatically compensate for low or high voltage batteries.
  • Ambient Light sensor (1) – an Adafruit ALS-PT19 light sensor in our deposit confirms when freight has successfully been transferred out of our intake and into our deposit. This enables maximum efficiency when extending our linear slides to score freight.
  • Motor encoders (3) – used on our turret and two linear slide motors. These encoders let our robot reliably tell how far our linear slides have been extended and what angle our turret has turned to.
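
As an example of the kind of compensation the voltage sensor enables, here is a minimal sketch of battery-voltage compensation; the 12 V nominal value and the clamping are assumptions, not our exact implementation.

<syntaxhighlight lang="java">
// Minimal sketch of battery-voltage compensation: scale a requested motor
// power by nominal/actual battery voltage so the robot behaves consistently
// on fresh and tired batteries.
public class VoltageCompensation {
    static final double NOMINAL_VOLTAGE = 12.0;   // assumed nominal battery voltage

    public static double compensate(double requestedPower, double batteryVoltage) {
        double scaled = requestedPower * NOMINAL_VOLTAGE / batteryVoltage;
        // Clamp so a low battery never produces power outside [-1, 1].
        return Math.max(-1.0, Math.min(1.0, scaled));
    }

    public static void main(String[] args) {
        System.out.println(compensate(0.5, 13.5)); // fresh battery: slightly less power
        System.out.println(compensate(0.5, 11.0)); // tired battery: slightly more power
    }
}
</syntaxhighlight>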


  • Automatically extend slides
  • Automatically drop intake after deposit
  • Fast reset position
  • Team Scoring Element quick placements
  • Sprint & slow mode
  • Drive to position (intaking and depositing)
  • Automatic offsets on shared
  • Automated duck spin sequence
  • Automatic alliance switch through file logs
  • etc.

Season Videos

Turing League Event 2: Qualification Match 19

Lessons Learned

What Works Well?

What Needs Improvement?