Tuesday, March 22, 2022

Headtracking/Panning FPV on Durafly EFXtra

High-performance FPV cameras on RC airplanes have come a long way. The ability to remotely pan and tilt onboard cameras has greatly improved the FPV flying experience. And driving that motion with a head tracker makes for a natural viewing experience. Formation flying, scenic views, and even air combat are far more realistic as a result.

Here's a formation flight video clip of an inexpensive head tracking system driving a panning Caddx Nebula (viewed through DJI Goggles V2), mounted on a very fast Durafly EFXtra:


Here's a photo of the airplane and the larger camera pod. This pod has room for a flight controller and GPS.

Here are some closeups of the pod and camera assembly (note the 4.5 g servo used here):



And here's a photo of the head tracker, which slides nicely into the 'standard' analog mount on the DJI headset:

All the plastic parts are 3D printed, and I continue to iterate on the design (Nebula Nano camera, gear types, size, weight):


I'll iterate a little more and then upload the designs to GitHub and Thingiverse, along with setup and assembly instructions.


Monday, September 20, 2021

Adding battery voltage telemetry to FrSky RX6R with Radiomaster TX16S

Now that Combat 2 (HB1) has had its maiden flight, cut a ribbon, and survived a mid-air, it is time to add voltage telemetry and move beyond timed flights. It turns out our ZOHD-based speed controllers make this pretty easy.

Items you will need:

  • ZOHD 30A ESC
  • FrSky RX6R receiver
  • 4.7k ohm resistor
  • 1.0k ohm resistor
  • The FrSky header connector (the one without the programming plug -- supplied with the receiver)
  • Heat shrink, extra wire, zip ties, etc.
Important: the resistor divider above is sized to get decent sensitivity for 3S or 4S batteries (the voltage at the A2 input must not exceed 3.3 V; for this divider that means a maximum battery voltage of 18.8 V). It is possible to use other resistor values for higher-voltage battery packs.
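To sanity-check divider values (including alternates for higher-voltage packs), here's a small sketch of the math. The helper names are mine, not part of any tool; only the 4.7k/1.0k values and the 3.3 V limit come from the build above:

```python
def divider_output(v_batt, r_top=4700.0, r_bottom=1000.0):
    """Voltage seen at the A2 pin for a given battery voltage."""
    return v_batt * r_bottom / (r_top + r_bottom)

def max_battery_voltage(v_a2_max=3.3, r_top=4700.0, r_bottom=1000.0):
    """Largest battery voltage that keeps the A2 pin at or below v_a2_max."""
    return v_a2_max * (r_top + r_bottom) / r_bottom

print(round(max_battery_voltage(), 1))   # 18.8 -- matches the limit quoted above
print(round(divider_output(16.8), 2))    # a fully charged 4S pack stays under 3.3 V
```

Plugging in other resistor pairs shows immediately whether a bigger pack would push A2 past 3.3 V.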

Here we see the ZOHD ESC as previously installed. What is nice is that the extra 2-wire connector (red/black) is connected directly to the battery terminals! This sure cleans up the wiring.


Next, solder up a resistor divider like this. Note that the two pins facing down will plug directly into that extra 2-wire connector. Cut them straight and to a good length for inserting into the connector.


Now locate the header cable included with the receiver and cut all the wires except the green one (double-check your color code and make sure you keep the wire that connects to the A2 sensor input, as shown in the receiver pinout diagram).


Insert the resistor divider into the connector. (Important: the 4.7k resistor MUST be on the red side and the 1.0k on the black side. Inserting it in reverse will likely damage the receiver's A2 sensor input circuits.)

Now, using a bit of jumper wire and heat shrink, insert the connector into the receiver header, solder an extension wire so the length is about the same as the ESC control cable, then insulate, heat shrink, and zip tie as necessary.


Now, if the TX16S is already bound and sensors discovered, the A2 voltage should show some value (here we see 7.7) when the airplane is powered up.

To calibrate to the actual voltage, enter model setup and scroll to the sensor tab. Note that A2 is displayed here. If you still need to discover sensors, this is the page to do it on.

Lastly, calibrate the voltage divider. Connect a flight battery along with a cell voltage monitor (here we see 11.06 V, yes, an empty battery). On the transmitter, scroll down to the A2 sensor and press Enter. On the page that appears, scroll down to the Ratio field, press Enter, and adjust the ratio until the voltage displayed in the upper left matches the battery voltage. That's it!
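Since the A2 reading scales linearly with the Ratio field, the new ratio can be computed in one step instead of nudged by hand. A sketch (the 13.2 starting ratio is a made-up example; only the 7.7 and 11.06 readings come from the photos above):

```python
def corrected_ratio(current_ratio, displayed_v, measured_v):
    # The A2 reading scales linearly with the Ratio field, so scale the
    # current ratio by (actual voltage / displayed voltage).
    return current_ratio * measured_v / displayed_v

# Display shows 7.7 V, the cell checker reads 11.06 V, and the Ratio
# field currently holds a hypothetical 13.2:
print(round(corrected_ratio(13.2, 7.7, 11.06), 2))  # 18.96
```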

Next, you can further program the transmitter to announce voltage values and low-voltage alarms. It might be interesting to apply a sag-compensated curve to the voltage so you don't have to exit the combat session too soon.
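For the sag-compensation idea, one simple model adds back the estimated IR drop under load. This is only a sketch: the function name and the internal-resistance value are made up, and a real pack's resistance would have to be measured or tuned:

```python
def sag_compensated_voltage(v_measured, current_a, r_internal_ohms=0.02):
    # Estimate the pack's resting voltage by adding back the I*R drop
    # under load; r_internal_ohms is a guessed per-pack value to tune.
    return v_measured + current_a * r_internal_ohms

# Pulling 20 A through ~0.02 ohm sags the pack by ~0.4 V:
print(sag_compensated_voltage(10.6, 20.0))  # 11.0
```

With a current sensor feeding the same telemetry stream, this would keep a throttle punch from tripping the low-voltage alarm early.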

Friday, September 10, 2021

Configuring FrSky RX6R with Radiomaster TX16S transmitter

Here's a short post on the steps needed to use a very nice FrSky RX6R receiver with a Radiomaster TX16S transmitter.

  1. Ensure the transmitter is running compatible OpenTX firmware
  2. Upload the appropriate FrSky firmware to the Transmitter (I used ACCST_2.1.1_FCC). This should be uploaded to the 'FIRMWARE' subtree. Firmware should be available here: FrSky downloads
  3. Modify the wiring of the included SBus to servo-style connector to be yellow, black, red. This ensures correct wiring from the pins inside the Transmitter:
  4. Next, create a new model in the transmitter (I use a dedicated one I call 'flash') which is set up for external RF in PPM mode. This configures the pins in the external RF compartment to power the receiver in standalone mode:
  5. Now, connect the receiver to the external RF ports as shown (yellow on the 'bottom'). This will power up the receiver if wired correctly:
  6. Now, prepare for firmware update.  Enter 'Sys' mode, move right to the flash icon, and scroll down to select [FIRMWARE]:
  7. Enter this subtree and scroll down to the ACCST firmware (for US, the FCC version is required):
  8. Next select the firmware and see a pop-up that says "Flash External Module":
  9. At this point the radio lights should change to green and a progress bar of writing to flash should be displayed:
  10. Once the above completes, we are ready to bind the receiver. Select the receiver mode FRSKYX2 - D16
  11. Next, prepare the receiver for binding. Here I set it up on an aircraft and power up the receiver while pressing the receiver button (when done correctly, the red light will be steady on):
  12. And lastly, select 'bind' and set for channels 1-8:
  13. The receiver LED flips to blue when the bind is complete. At this point, power cycle the receiver and do a fine tune and range check. Add in hardware failsafe (set the transmitter to failsafe positions, power up the receiver, and press the button on the receiver within 10 seconds). You should also set failsafe in the transmitter.
  14. Then you can discover the sensors and get receiver telemetry from the sensor sub-menu of the model.
  15. (TODO: describe how to prepare a voltage divider so you can get battery voltage telemetry too.)

Sunday, June 25, 2017

NextThingCo ChipPro kernel debugging with kgdb -- It can be done!

Lately, I've been working on an embedded platform IoT experiment, and for this I have selected the ChipPro (http://getchip.com) platform. It is definitely an improvement over some of the other platforms, and the current $16 price is hard to beat. Some of the notable features include:

  • Decent ARM processor at 1 GHz
  • 256 MB of RAM
  • 512 MB of flash
  • Plenty of I/O
  • A nice power controller module
  • And a very nice dual channel 802.11 solution
  • A tidy buildroot platform more or less ready to extend
Anyway, I am building some IoT applications (with AWS Greengrass) to explore continuous security patterns, over-the-air upgrades, and mesh architectures.

As is often the case with lower-level hardware solutions, I'm extending Linux drivers and introducing problems along the way, eventually needing to do low-level startup debugging. There's not much documentation on this, so I figured I'd dig in and work it out.

Yes, it can be done. Summary:
  • Build the platform with debugging enabled
  • Load this image onto the target as usual
  • Modify boot args to start up and enter kdb
  • Then through the console, flip over to kgdb
  • And launch gdb-multiarch inside the buildroot VM

Steps to get kernel debugging enabled

This all assumes you are familiar with ChipPro buildroot builds as shown here: 

Configure the build to include kernel debugging

Add these entries to the linux-new.config (inside your clone of gadget-buildroot)
CONFIG_DEBUG_INFO=y
CONFIG_MAGIC_SYSRQ=y
# CONFIG_DEBUG_RODATA is not set
CONFIG_FRAME_POINTER=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
CONFIG_KGDB_KDB=y
CONFIG_KDB_KEYBOARD=y

Rebuild the kernel from scratch

scripts/build-gadget make clean
scripts/build-gadget make xxx_defconfig
scripts/build-gadget make -s

Load the newly built image onto system flash

(following instructions from gadget-buildroot)

Modify boot args to enter kdb on early boot

Now, assuming you have a serial port attached to your build system, you should be able to connect to the serial console to watch boot, etc. For me on a Mac this is:
screen /dev/tty.usbserial 115200
Here is where you can interrupt the boot sequence at the firmware load, modify boot environments, etc. You can either interrupt the boot sequence (e.g., if your code updates panic the kernel early), or you can use fw_printenv and fw_setenv, which are commands in the target's image.

The environment parameters (fw_printenv) should show one entry like this:
bootargs=root=ubi0:rootfs rootfstype=ubifs rw ubi.mtd=4 lpj=5009408 ubi.fm_autoconvert=1 quiet
Now, modify this entry (using fw_setenv) to enter kdb using the serial console on boot:
fw_setenv bootargs 'root=ubi0:rootfs rootfstype=ubifs rw ubi.mtd=4 lpj=5009408 ubi.fm_autoconvert=1 kgdboc=ttyS0,115200 kgdbwait'
At this point if all is well, when you reset the device it should enter kdb.

Use gdb to debug the kernel

Several things need to happen here. You need to get serial console access inside the docker container that does the build. Depending on the VM you are using, the steps to make the USB serial port visible to the buildroot guest vary. Since I'm using VirtualBox on a Mac, I wound up enabling USB on the guest and restarting it. Native docker support will be a bit simpler.

Anyway, once the port is available to the guest, you need to alter scripts/build-gadget to include the device. Something like this on the docker run command in the build-gadget script (the --device line):
docker run \
--rm \
--env BR2_DL_DIR=/opt/dlcache/ \
--volume=gadget-build-dlcache:/opt/dlcache/ \
--volume=gadget-build-output:/opt/output \
--volume=${IMAGE_DIR}:/opt/output/images \
--volume=${GADGET_DIR}:/opt/gadget-os-proto \
--name=gadget-build-task \
-w="/opt/gadget-os-proto" \
--device=/dev/ttyUSB0 \
Now, you will need to enter the guest interactively. The best way to do this is (from the gadget-buildroot top-level directory):
TERM=yes ./scripts/build-gadget bash
Now you should install screen and gdb-multiarch (TODO: this should be part of the build container):
apt update
apt-get install screen gdb-multiarch 
Now, connect to the serial console from inside the guest:
screen /dev/ttyUSB0 115200
And boot up the device to kdb. Then flip the console over to kgdb (by entering 'kgdb' at the kdb prompt). At this point you need to terminate screen (terminate, not detach; otherwise gdb will complain that the port is in use).

Now debug away:

In the guest, change over to the linux directory of the build. This is something like /opt/output/build/linux-... Then start up gdb in cross-platform mode:
gdb-multiarch 
set arch arm
file vmlinux
target remote /dev/ttyUSB0 
That should do it: you are in an early boot intercept, using a remote debugger from inside a guest OS that has all the necessary source!

Happy debugging!

Tuesday, May 31, 2016

Efficient Apple Watch CMSensorRecorder transport to AWS

Well, I have come full circle. A long time ago, this experiment ran through Rube Goldberg system #1:

  • Dequeue from CMSensorRecorder
  • Pivot the data
  • Send it via WCSession to the iPhone
  • iPhone picked up the data and queued it locally for Kinesis
  • Then the Kinesis client transported data to Kinesis
  • Which had a Lambda configured to dequeue the data
  • And write it to DynamoDB
Then, I flipped the data around in an attempt to have the Watch write directly to DynamoDB:
  • Dequeue from CMSensorRecorder
  • Pivot the data to a BatchPutItem request for DynamoDB
  • Put the data to DynamoDB
  • (along the way run the access key Rube Goldberg machine mentioned earlier)
The problems with both of these approaches are the cost to execute on the Watch and the Watch's lack of background processing. This meant it was virtually impossible to get data dequeued before the Watch app went to sleep.

I did a little benchmarking over the weekend and found that brute force dequeue from CMSensorRecorder is fairly quick. And the WCSession sendFile support can run in the background, more or less. So, I will now attempt an alternate approach:
  • Dequeue from CMSensorRecorder
  • Minimal pivot of data including perhaps raw binary to a local file
  • WCSession:sendFile to send the file to the iPhone
  • Then iPhone gets the file and sends it itself to AWS (perhaps a little pivot, perhaps S3 instead of DynamoDB, etc.)
  • (along the way a much simpler access key machine will be needed)
The theory is that this'll get the data out of the Watch quickly during its limited active window.
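As a sketch of that "minimal pivot" step, fixed-size binary records are cheap to write on the Watch and trivial to parse on the iPhone side. The record layout here (timestamp plus x/y/z as little-endian doubles) is my own assumption for illustration, not the app's actual format:

```python
import struct

# Hypothetical minimal "pivot": pack each accelerometer sample as a
# fixed-size binary record (timestamp, x, y, z as little-endian doubles).
RECORD = struct.Struct("<dddd")  # 32 bytes per sample

def pack_samples(samples):
    """samples: iterable of (t, x, y, z) tuples -> one bytes blob."""
    return b"".join(RECORD.pack(*s) for s in samples)

def unpack_samples(blob):
    """Inverse of pack_samples: bytes blob -> list of (t, x, y, z) tuples."""
    return [RECORD.unpack_from(blob, off)
            for off in range(0, len(blob), RECORD.size)]

blob = pack_samples([(1.0, 0.01, -0.02, 0.98)])
print(len(blob))                # 32
print(unpack_samples(blob)[0])  # (1.0, 0.01, -0.02, 0.98)
```

Doubles round-trip exactly, so nothing is lost in transit, and the blob can be handed straight to a file-transfer call.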

We'll see...

Saturday, May 28, 2016

The limits of AWS Cognito

Well, after a slight hiatus, I spent a little time understanding how to use AWS Cognito in an application. I've now got a more or less running Cognito-as-STS-token-generator for the Apple Watch. Features:

  • Wired to any or all of Amazon, Google, Twitter or Facebook identity providers
  • Cognito processing occurs on the iPhone (hands STS tokens to Watch as Watch can't yet run the AWS SDK)
  • Leverage Cognito's ability to 'merge' identities producing a single CognitoID from multiple identity providers
  • Automatic refresh of identity access tokens
Here's the iPhone display showing all the identity providers wired:

Ok, the good stuff. Here what the access key flow now looks like:


There are a lot of actors in this play. The key actor for this article is the IdP. Here, as a slight generalization across all the IdPs, we have a token exchange system. The iPhone maintains the long-lived IdP session key from the user's last login. Then, the iPhone has the IdP exchange the session key for a short-lived access key to present to Cognito. For IdPs like Amazon and Google, the access key is only good for an hour and must be refreshed...

Let me say that again: today, we need to manually refresh this token for Cognito before asking Cognito for an updated STS token! Cognito can't do this! FYI, read Amazon's description here: "Refreshing Credentials from Identity Service"

Especially in our case, where our credentials provider (Cognito) is merely referenced by the other AWS resources, we need to intercept the Cognito call to make sure that on the other side of Cognito, the 'logins' are up to date.

So, I replumbed the code to do just this (the 'opt' section in the above diagram). Now, a user can log in once on the iPhone application and then each time the Watch needs a token, the whole flow tests whether or not an accessKey needs to be regenerated.

For reference, here's the known lifetimes of the various tokens and keys:
  • The Watch knows its Cognito generated STS Token is good for an hour
  • Amazon accessTokens are good for an hour (implied expire time)
  • Google accessToken is good until an expire time (google actually returns a time!)
  • Twitter tokens don't expire, so its accessKey is effectively unlimited
  • Facebook's token is good for a long time (actually the timeout is 60 days)
  • TODO: do any of the IdPs enforce idle timeouts? (e.g. a sessionKey has to be exchanged within a certain time or it is invalidated...)
So, with all these constants, and a little lead time, the Watch -> iPhone -> DynamoDB flow looks pretty robust. The current implementation is still limited to having the Watch ask the iPhone for the STS token, since I haven't figured out how to get the various SDKs working on the Watch. I don't want to rewrite all the IdP fetch code, along with manual calls to Cognito.
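Combining those lifetimes with a little lead time, the per-login refresh check might look like this sketch. The class and field names are hypothetical, not AWS or IdP SDK calls:

```python
import time

# Sketch of the refresh decision described above; IdpSession and its
# fields are hypothetical names, not SDK APIs.
class IdpSession:
    def __init__(self, access_token, expires_at, lead_seconds=300):
        self.access_token = access_token
        self.expires_at = expires_at      # epoch seconds, or None if non-expiring
        self.lead_seconds = lead_seconds  # refresh a little early

    def needs_refresh(self, now=None):
        if self.expires_at is None:       # e.g. Twitter: no expiry
            return False
        if now is None:
            now = time.time()
        return now >= self.expires_at - self.lead_seconds

# Before each Cognito credentials call, test every wired-up login:
sess = IdpSession("abc", expires_at=1000.0)
print(sess.needs_refresh(now=600.0))  # False: well inside the lifetime
print(sess.needs_refresh(now=800.0))  # True: inside the 300 s lead window
```

Running this over all four IdP logins before touching Cognito keeps the 'logins' map fresh on the other side of the exchange.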

Plus, I'm likely to move the AWS writes back to the iPhone as the Watch is pretty slow.

The code for this release is here. The operating code is also in TestFlight (let me know if you want to try it).

Known bugs:
  • Google Signin may not work when the app is launched from the Watch (app crashes)
  • Facebook login/logout doesn't update the iPhone status section
  • The getSTS in the Watch is meant to be pure async -- I've turned this off until its logic covers more of the various edge cases.
  • The webapp should also support all 4 IdP (only Amazon at the moment)




Tuesday, February 23, 2016

Wow: Multiple Identity Providers and AWS Cognito

I've finally found time to experiment with multiple identity providers for Cognito, mostly to understand how a CognitoId is formed, merged, and invalidated. It turns out this is a significant finding, especially when this Id is used, say, as a primary key for data storage!

Recall, the original sensor and sensor2 projects were plumbed with Login With Amazon as the identity provider to Cognito. This new experiment adds GooglePlus as a second provider. Here you can see the test platform on the iPhone:

Keep in mind that for this sensor2 application, the returned CognitoId is used as the customer's key into the storage databases. Both for access control and as the DynamoDB hash key.

The flow on the iPhone goes roughly as follows:
  • A user can login via one or both of the providers
  • A user can logout
  • A user can also log in using the same credentials on a different device (e.g. another iPhone with the application loaded)
Now here's the interesting part. Depending on the login ordering, the CognitoId returned to the application (on the Watch in this case) can change! Here's how it goes with my test application (which includes a "Logins" merge):
  • Starting from scratch on a device
  • Login via Amazon where user's Amazon identity isn't known to this Cognito pool:
    • User will get a new CognitoId allocated
  • If user logs out and logs back in via Amazon, the same Id will be returned
  • If the user now logs into a second device via Amazon, the same Id will be returned
  • (so far this makes complete sense)
  • Now, if the user logs out and logs in via Google, a new Id will be returned
  • Again, if the user logs out and in again and same on second device, the new Id will continue to be returned
  • (this all makes sense)
  • At this point, the system thinks these are two users and those two CognitoIds will be used as different primary keys into the sensor database...
  • Now, if the user logs in via Amazon and also logs in via Google, a CognitoId merge will occur
    • One, or the other of those existing Ids from above will be returned
    • And, the other Id will be marked via Cognito as disabled
    • This is a merge of the identities
    • And this new merge will be returned on other devices from now on, regardless of whether they log in solely via Amazon or Google
    • (TODO: what happens if user is logged into Amazon, has a merged CognitoId and then they log in using a second Google credential?)
This is all interesting and sort of makes sense: if a Cognito context has a map of logins that have been associated, then Cognito will do the right thing. This means that some key factors have to be considered when building an app like this:
  • As with my application, if the sensor database is keyed by the CognitoId, then there will be issues of accessing the data indexed by the disabled CognitoId after a merge
  • TODO: will this happen with multiple devices going through an anonymous -> identified flow?
  • It may be that additional resolution is needed to help with the merge -- e.g. if there is a merge, then ask the user to force a join -- and then externally keep track of the merged Ids as a set of Ids -> primary keys for this user...
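The external-tracking idea in that last bullet could be sketched as a small alias table that maps every CognitoId ever seen for a user (including Ids disabled by a merge) onto one canonical storage key. All names here are hypothetical:

```python
# Illustrative alias table: every CognitoId a user has ever presented
# (including ones Cognito later disables in a merge) resolves to one
# canonical storage key for the sensor database.
class IdentityAliases:
    def __init__(self):
        self._alias_to_key = {}

    def register(self, cognito_id, storage_key):
        self._alias_to_key[cognito_id] = storage_key

    def merge(self, surviving_id, disabled_id):
        # After a Cognito merge, point the disabled Id at the survivor's key
        # so old rows stay reachable.
        self._alias_to_key[disabled_id] = self._alias_to_key[surviving_id]

    def storage_key(self, cognito_id):
        return self._alias_to_key[cognito_id]

aliases = IdentityAliases()
aliases.register("id-amazon", "user-123")
aliases.register("id-google", "user-456")
aliases.merge("id-amazon", "id-google")
print(aliases.storage_key("id-google"))  # user-123
```

In a real app the table would live server-side, and the merge step is where you would prompt the user to confirm the join.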
Anyway, I'm adding in a couple more providers to make this more of a ridiculous effort. After which I'll think about resolution strategies.