Thursday, November 19, 2015

CMSensorRecorder from Watch to AWS

I've done a couple of experiments with direct networking from WatchOS 2.0 using NSURLSession. Based on the results, I now have a strategy for getting the iPhone out of the loop.

Recall the current plumbing looks like this:

  • CMSensorRecorder data collected in Watch
  • Dequeued by a native application on the Watch
  • Sent via WCSession to iPhone
  • iPhone uses AWS KinesisRecorder to buffer received events
  • KinesisRecorder sends to AWS (using Cognito credentials)
  • Kinesis is dequeued to an AWS Lambda
  • The Lambda stores the records to DynamoDB (using a fixed IAM role)
This is fine, but has a lot of moving parts. And more importantly, Kinesis is provisioned at expected throughput and you pay by the hour for this capacity.

My next experiment will look like this:
  • CMSensorRecorder data collected in Watch
  • Dequeued by a native application on the Watch
  • Sent directly to AWS IoT (using on-watch local long lived IAM credentials)
  • An IoT rule will send these events to Lambda
  • The Lambda stores the records to DynamoDB (using a fixed IAM role)
The important difference here is that IoT is charged based on actual use, not provisioned capacity. This means the effective costs are more directly related to use and not expected use!

Also, the Watch will be able to transmit directly to AWS instead of going through the iPhone (well, it does proxy through its paired phone if in range; otherwise it goes direct via WiFi if a known network is near). This feature will be an interesting proof of concept indeed.

Anyway, for this first experiment the writes to IoT are via HTTP POST (not via the MQTT protocol). And authentication is via long lived credentials loaded into the Watch, not Cognito. This is primarily because I haven't yet figured out how to get the AWS iOS SDK to work on the Watch (and the iPhone at the same time). So, I am instead using low-level AWS API calls. I expect this will change with a future release of the SDK.
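Whatever the transport, the Lambda on the other end just maps each published event to a DynamoDB item. A minimal sketch of that mapping, where the field names (`ts`, `x`, `y`, `z`) and the userId/sampleTime key scheme are my assumptions for illustration, not an actual contract:

```javascript
// Map one recorder event (as published to AWS IoT) to a DynamoDB put item.
// Field names and key scheme are illustrative assumptions.
function toDynamoItem(cognitoId, event) {
  return {
    PutRequest: {
      Item: {
        userId: { S: cognitoId },              // partition key
        sampleTime: { N: String(event.ts) },   // sort key, epoch milliseconds
        x: { N: String(event.x) },
        y: { N: String(event.y) },
        z: { N: String(event.z) }
      }
    }
  };
}
```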

I have toyed with the idea of having the Watch write directly to DynamoDB or Lambda instead of going through IoT. Since the Watch is already buffering the sensor data, I don't really need yet another reliable queue in place. Tradeoffs:
  1. Send via IoT
    • + Get Kinesis-like queue for reliable transfer
    • + Get some runtime on IoT and its other capabilities
    • - Paying for another buffer in the middle
  2. Send direct to Lambda
    • + One less moving part
    • - To ensure the data is sent, need to make a synchronous call to Lambda, which can be delayed by the DynamoDB write; not sure how well this will work on networks in the field
  3. Send direct to DynamoDB
    • + The lowest cost and least moving parts (and lowest latency)
    • - DynamoDB batch writes can only handle 25 items (0.5 seconds) of data
Note: on DynamoDB, earlier I discussed a slightly denormalized data storage scheme, where each second of data is recorded in one DynamoDB row (with separately named columns per sub-second event). Since DynamoDB can do no-clobber updates, this is a nice tradeoff of rows vs data width. This would change the data model and the reader would need to take this into account, but it may make the most sense no matter what. Basically, doing this gets better utilization of a DynamoDB 'row' by compacting the data as much as possible. It probably reduces the overall cost of using DynamoDB too, as the provisioning would, in general, be reduced. So, I may just re-do the data model and go direct to DynamoDB for this next POC.
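A sketch of that compaction, assuming 50 Hz samples keyed by epoch-millisecond timestamps and illustrative column names (s00..s49 for the 20 ms slots within each second):

```javascript
// Compact 50 Hz samples into one row per second, one named column per
// sub-second slot. Column naming (s00..s49) is my own convention here.
function compactBySecond(samples) {   // samples: [{ts, x, y, z}, ...], ts in epoch ms
  const rows = {};
  for (const s of samples) {
    const sec = Math.floor(s.ts / 1000);                 // row key: whole second
    const slot = Math.floor((s.ts % 1000) / 20);         // 20 ms per slot at 50 Hz
    const col = 's' + String(slot).padStart(2, '0');
    (rows[sec] = rows[sec] || {})[col] = [s.x, s.y, s.z];
  }
  return rows;
}
```

A no-clobber update per column then fills each row incrementally as samples arrive.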

Stay tuned!

Thursday, October 29, 2015

Sensor: You Can Try Out Some Real Data

I've set up a rendering of some actual sensor data in a couple of formats:

  • A line chart with X as time and Y as 3 lines of x, y, z acceleration
  • A 3d plot of x, y, z acceleration with color being the sample time
It is interesting to see the actual sensor fidelity in a visual form. CMSensorRecorder records at 50 samples per second and the visualizations are 400 samples, or 8 seconds, of data.

You can try out the sample here at http://test.accelero.com. There are a couple of suggested start times shown on the page. Enter a time and hit the Fetch button. Recall this Fetch button allows the browser to directly query DynamoDB for the sample results -- in this case anonymously, and hard coded to this particular user's Cognito Id...


Once the results are shown you should be able to drag around on the 3d plot to see the acceleration over time.

The above timeslice is a short sample where the watch starts flat and is rotated 90 degrees in a few steps. If you try out the second sample you will see a recording of a more circular motion of the watch.

Note that d3.js is used for the line charts and vis.js is used for the interactive 3d plot.

Sunday, October 25, 2015

Apple Watch Accelerometer displayed!

There you have it! A journey started in June has finally rendered the results intended. Accelerometer data from the Watch is processed through a pile of AWS services to a dynamic web page.

Here we see the very first rendering of a four second interval where the watch is rotated around its axis. X, Y and Z axes are red, green, blue respectively. Sample rate is 50/second.

The accelerometer data itself is mildly interesting. Rendering it on the Watch or the iPhone was a trivial exercise. The framework in place is what makes this fun:
  • Ramping up on WatchOS 2.0 while it was being developed
  • Same with Swift 2.0
  • Getting data out of the Watch
  • The AWS iOS and Javascript SDKs
  • Cognito federated identity for both the iPhone app and the display web page
  • A server-less data pipeline using Kinesis, Lambda and DynamoDB
  • A single-page static content web app with direct access to DynamoDB
No web servers, just a configuration exercise using AWS PaaS resources. This app will likely be near 100% uptime, primarily charged per use, will scale with little intervention, is logged, AND is a security-first design.

Code for this checkpoint is here.

Friday, October 23, 2015

Amazon's iOS SDK KinesisRecorder: bug found!

Recall earlier posts discussing 50% extra Lambda->DynamoDB event storage. It turns out the problem is the AWS SDK KinesisRecorder running in the iPhone. Unlike the sample code provided, I actually have concurrent saveRecord() and submitAllRecords() flows -- sort of like real world. And this concurrency exposed a problem in the way KinesisRecorder selects data for submit to Kinesis.

Root Cause: rowid is not a stable handle for selecting and removing records.

Anyway, I made a few changes to KinesisRecorder:submitAllRecords(). These changes are mostly to index records by their partition_key. This seems to work ok for me. However, it may not scale for cases where the KinesisRecorder winds up managing a larger number of rows. This needs some benchmarking.
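The shape of the fix, reduced to an in-memory sketch: snapshot the partition keys being submitted, then delete exactly those keys, so saves that land mid-submit survive (names here are mine, not the SDK's):

```javascript
// Sketch of the stable-handle idea: select a batch by partition key, send it,
// then delete exactly the records that were submitted. Records saved
// concurrently (with new keys) are untouched.
function submitBatch(store, send) {      // store: Map(partitionKey -> record)
  const batchKeys = [...store.keys()];   // snapshot keys before sending
  send(batchKeys.map(k => store.get(k)));
  for (const k of batchKeys) store.delete(k);   // only the submitted records
}
```

Selecting by rowid instead can pick up (or delete) rows that weren't part of the submitted batch once saves interleave.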

Pull request is here.  And here's updated iPhone code to do the right thing.

As they say "now we're cookin' with gas!"

Here we see the actual storage rate is around the expected 50 per second. The error and retry rates are minimal.

Sooo, back to now analyzing the data that is actually stored in DynamoDB!

Sunday, October 18, 2015

AWS Lambda: You can't improve what you don't measure

Now that there is a somewhat reliable pipeline of data from the Watch-iPhone out to AWS, I have a chance to measure the actual throughput through AWS. Interesting results indeed.

As of this moment, Lambda is performing 50% more work than is needed.

Here's a plot of DynamoDB write activity:

Here we see that during a big catchup phase, when a backlog of events is being sent at a high rate, the writes are globally limited to 100/second. This is good and expected. However, the last part is telling. Here we have caught up and only 50 events/second are being sent through the system. But DynamoDB is showing 75 writes/second!

Here's a filtered log entry from one of the Lambda logs:

Indeed, individual batches are being retried. See job 636 for example. Sometimes on the same Lambda 'instance'. Sometimes on a different instance. This seems to indicate some sort of visibility timeout issue (assuming the Lambda queuer even has this concept).

Recall, the Watch is creating 50 accelerometer samples per second through CMSensorRecorder. And the code on the Watch-iPhone goes through various buffers and queues and winds up sending batches to Kinesis. Then the Kinesis->Lambda connector buffers and batches this data for processing by Lambda. This sort of a pipeline will always have tradeoffs between latency, efficiency and reliability. My goal is to identify some top level rules of thumb for future designs. Again my baseline settings:
  • Watch creates 50 samples/second
  • Watch dequeuer will dequeue up to 200 samples per batch
  • These batches are queued on the iPhone and flushed to Kinesis every 30 seconds
  • Kinesis->Lambda will create jobs of up to 5 of these batches
  • Lambda will take these jobs and store them to DynamoDB (in batches of 25)
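The baseline settings above can be sanity-checked with a little arithmetic (the constants mirror the list; nothing else is assumed):

```javascript
// Rough pipeline arithmetic from the baseline settings.
const sampleHz = 50;              // Watch sample rate
const watchBatch = 200;           // samples per Watch dequeue batch
const flushSec = 30;              // iPhone -> Kinesis flush period
const lambdaJobBatches = 5;       // Kinesis batches per Lambda job

const secondsPerWatchBatch = watchBatch / sampleHz;        // 4 s of data per batch
const maxSamplesPerJob = watchBatch * lambdaJobBatches;    // up to 1000 samples per job
const worstCaseBatchingSec = flushSec * lambdaJobBatches;  // 0-150 s Kinesis batching
```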
There are some interesting observations:
  • These jobs contain up to 1000 samples and take a little over 3 seconds to process
  • The latency pipeline from actual event to DynamoDB store should be:
    • Around 2-3 minute delay in CMSensorRecorder
    • Transmit from Watch to iPhone is on the order of 0.5 seconds
    • iPhone buffering for Kinesis is 0-30 seconds
    • Kinesis batching will be 0-150 seconds (batches of 5)
    • Execution time in the Lambda of around 3 seconds
  • Some basic analysis of the Lambda flow shows items getting re-dispatched often!
This last part is interesting. In digging through the Lambda documentation and postings from other folks, there is very little definition of the retry contract. Documents say things like "if the lambda throws an exception.." or "items will retry until they get processed". Curiously, nothing spells out what constitutes processed, when it decides to retry, etc.

Unlike SQS, exactly what is the visibility timeout? How does Lambda decide to start up an additional worker? Will it ever progress past a corrupted record?

This may be a good time to replumb the pipeline back to my traditional 'archive first' approach: S3->SQS->worker. This has defined retry mechanisms, works at very high throughput, and looks to be a bit cheaper anyway!
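For contrast, the SQS visibility-timeout contract is easy to state precisely. A toy model of just the semantics (not the real API):

```javascript
// Toy model of SQS-style visibility timeout: a received message becomes
// invisible for visibilityMs; if it isn't deleted in time, it reappears
// and will be redelivered. Deletion is the explicit "processed" signal.
class ToyQueue {
  constructor(visibilityMs) { this.visibilityMs = visibilityMs; this.msgs = []; }
  sendMessage(body) { this.msgs.push({ body, invisibleUntil: 0 }); }
  receiveMessage(nowMs) {
    const m = this.msgs.find(m => m.invisibleUntil <= nowMs);
    if (m) m.invisibleUntil = nowMs + this.visibilityMs;   // hide while in flight
    return m ? m.body : null;
  }
  deleteMessage(body) { this.msgs = this.msgs.filter(m => m.body !== body); }
}
```

This is exactly the kind of spelled-out retry contract the Lambda documentation lacks.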


Thursday, October 15, 2015

Apple Watch Accelerometer -> iPhone -> Kinesis -> Lambda -> DynamoDB

I've been cleaning up the code flow for more and more of the edge cases. Now, batches sent to Kinesis include Cognito Id and additional instrumentation. This will help when it comes time to troubleshoot data duplication, dropouts, etc. in the analytics stream.

For this next pass, the Lambda function records the data in DynamoDB -- including duplicates. The data looks like this:



The Lambda function (here in source) deserializes the event batch and iterates through each record, one DynamoDB put at a time. Effective throughput is around 40 puts/second (on a table provisioned at 75/sec).

Here's an example run from the Lambda logs (comparing batch size 10 and batch size 1):

START RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced Version: $LATEST
2015-10-16T04:10:46.409Z d0e5b23a-54f1-4be8-b100-3a4eaabfbced Records: 10 pass: 2000 fail: 0
END RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced
REPORT RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced Duration: 51795.09 ms Billed Duration: 51800 ms Memory Size: 128 MB Max Memory Used: 67 MB
START RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e Version: $LATEST
2015-10-16T04:13:22.468Z 6f430920-1789-43e1-a3b9-21aa8f79218e Records: 1 pass: 200 fail: 0
END RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e
REPORT RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e Duration: 5524.53 ms Billed Duration: 5600 ms Memory Size: 128 MB Max Memory Used: 67 MB
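A quick check of the effective put rate from the two runs above:

```javascript
// Effective DynamoDB put rate from the two Lambda REPORT lines.
function putsPerSecond(records, durationMs) {
  return records / (durationMs / 1000);
}

const rate10 = putsPerSecond(2000, 51795.09);  // batch size 10: ~38.6 puts/s
const rate1  = putsPerSecond(200, 5524.53);    // batch size 1:  ~36.2 puts/s
```

Both land right around the "40 puts/second" figure, so batch size barely moves the per-put cost here.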

Recall, the current system configuration is:
  • 50 events/second are created by the Watch Sensor Recorder
  • These events are dequeued in the Watch into batches of 200 items
  • These batches are sent to the iPhone on the fly
  • The iPhone queues these batches in the onboard Kinesis recorder
  • This recorder flushes to Amazon every 30 seconds
  • Lambda will pick up these flushes in batches (presently a batch size of 1)
  • These batches will be written to DynamoDB [async.queue concurrency = 8]
The Lambda batch size of 1 is an interesting tradeoff.  This results in the lowest latency processing. The cost appears to be around 10% more work (mostly a lot more startup/dispatch cycles).

Regardless, this pattern needs to write to DB faster than the event creation rate...

Next steps to try:
  • Try dynamo.batchWriteItem -- this may help, but will be more overhead to deal with failed items and provisioning exceptions
  • Consider batching multiple sensor events into a single row. The idea here is to group all 50 events in a particular second into the same row. This will only show improvement if the actual length of an event record is a significant fraction of 1kb record size
  • Shrink the size of an event to the bare minimum
  • Consider using Avro for the storage scheme
  • AWS IoT
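On the dynamo.batchWriteItem option above: the 25-item cap means the Lambda would first need to chunk each job. A minimal sketch of the chunking only (retry of UnprocessedItems and provisioning exceptions are the real work, and are left out):

```javascript
// Split a job's items into batchWriteItem-sized chunks (max 25 per call).
function chunkForBatchWrite(items, maxBatch = 25) {
  const batches = [];
  for (let i = 0; i < items.length; i += maxBatch) {
    batches.push(items.slice(i, i + maxBatch));
  }
  return batches;
}
```

A 200-item Watch batch thus costs 8 batchWriteItem calls instead of 200 individual puts.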
Other tasks in the queue:
  • Examine the actual data sent to DynamoDB -- what are the actual latency results?
  • Any data gaps or duplication?
  • How does the real accelerometer data look?
  • (graph the data in a 'serverless' app)

Sunday, September 27, 2015

Cognito based credentials finally refreshing

It turns out I had it wrong all along. See here's the flow:
  • Cognito is mapping an identity to a STS based role
  • We need to ask Cognito to refresh the credentials directly (not just the provider refresh)
Now, there is some debate as to whether this part of the SDK is obeying the refresh contract. So, for now I have this construct in the 'flush to kinesis' flow:
    if (self.credentialsProvider.expiration == nil ||
        self.credentialsProvider.expiration.timeIntervalSinceNow < AppDelegate.CREDENTIAL_REFRESH_WINDOW_SEC) {
        let delegate = AuthorizeUserDelegate(parentController: self.viewController)
        delegate.launchGetAccessToken()
        NSLog("refreshed Cognito credentials")
    }
This winds up triggering the usual Cognito flow. And if a persistent identity is in the app, then this finally does the right thing. Simulator-based transmit now does token refresh reliably over many hours and many versions of STS tokens.

Also, this version of the code is updated based on the release versions of Xcode 7, iOS 9 and watchOS 2. Everything is running fairly smoothly. There are still a couple of areas I'm investigating:

  • The WCSession:sendMessage seems to get wedged in a certain sequence. Watch sends a message, is waiting for a reply, phone gets message, then watch goes to sleep. The phone has processed the message and is blocked on the reply to the watch. This doesn't seem to get unwedged any way other than waiting for a 2 or 5 minute timeout.
  • This particular code does get into an initial blocked state if the phone is locked. This looks to be something where the accelerometer sensor needs to check with the phone to see if the user has granted access to the sensor.
Both of the above are a bit more than minor inconveniences. The first means that even if you live with the watch app going to sleep often, you still can't reliably transfer a bunch of data to the phone using the sendMessage method. The second means starting the app on the watch isn't clean when the phone is locked or out of range. Maybe there is a reason. But really, we are at a point where getting the sensor data out of the watch for anywhere close to near-realtime processing isn't yet realized.


Sunday, September 13, 2015

Sensor: running well on iOS 9 Seed and WatchOS 2

I've made a checkpoint of the sensor code that corresponds to the iOS9 GM seed and WatchOS 2.0. The release tag is here. Note, this code is configured to generate synthetic data, even on the hardware. I'm using this to prove the robustness of the Watch -> iPhone -> AWS connections across noisy connections.

I've cleaned up the transport a bit to send JSON directly from the Watch. This goes across the WCSession to the iPhone. The iPhone does parse the data to examine it and update its engineering display. But really, this raw JSON payload is sent directly to Kinesis.

Here's a screen dump of an AWS Lambda parsing the Kinesis flow. This Lambda simply prints the JSON, enough to show what is being sent:



This code runs pretty well in background mode on the iPhone. The data flow continues even while the phone is locked, or working on another application. This key concept shows the iPhone as a buffered proxy to AWS.

Next up, handling a few error cases a bit better:
  • When the watch goes in and out of range
  • When the phone goes in and out of being able to reach AWS
  • And of course, when the watch goes to sleep (real goal is to keep being able to dequeue from CMSensorRecorder while watch is asleep)

Sunday, August 30, 2015

Account Break-in and the eMail Weak Link

Last week, a relative's bank account was drained. In this case a person presented some checks, in person, to a teller at the bank. The teller performed a fraud check by calling the telephone number on record for the account. The person who answered the telephone confirmed the checks were valid. Then the bank cashed the checks. Disturbing? Commonplace!

I spend a lot of time working on computer security for my day job. We spend great amounts of time on threat analysis, process control, fault detection. The world isn't very far along with respect to full protection.

So, let's dig into a particular sequence of events that likely happened here:
  • Someone was able to gain access to the eMail account for the relative
  • Next, they examined the eMail history (e.g. for the bank account number)
  • Next, they requested a password reset for the bank account (sent to the email)
  • Now, with access to the bank account, they:
    • Examined signature on some of the online facsimiles of recently cashed checks
    • Altered the contact telephone number for the bank account
    • Deleted any account-changed evidence that was emailed to the relative
  • Next, they prepared a counter check (readily available) and went to the bank
  • The banker called the fake number and approved the transaction
Yes, the bank will accept liability and will (eventually) return the money. However, this scenario isn't quite like credit-card fraud. Here is a case where your real bank account has no money, you are stuck until the bank restores the funds (after their additional check to make sure you aren't part of the ring!)

Mobile phone eMail Weak Link

The break-in got me to re-visiting my assumptions about eMail and if there is any way to improve eMail password recovery plumbing. How good are eMail provider's MFA solutions? How big is the risk when MFA tokens and email might be on the same phone? How easy is it to social engineer a password recovery? Two cases in point:

Yahoo eMail

  • Has a multi-factor solution that binds a special code to a particular system
  • However, allows for a single password that binds to a mobile device
  • Allows for SMS or alternate eMail address for password recovery
The problem here is that if a phone is stolen, both the SMS target AND the email address are on the same system (always have your phone go to lock mode). Also, if the password recovery email is compromised, then it is likely possible to recover a password for Yahoo (this is exceptionally easy these days as email correlation databases are readily available).

So, one solution is to not have an SMS recovery for Yahoo and then to have the recovery email to another service (say gmail).

Note, even though the SMS target can be removed for Yahoo, they really want one: they will re-ask for an SMS target during recovery even if you only want to use the recovery email. So, be careful not to let SMS recovery leak back into your configuration.

Google eMail

  • Has a time based token option for 2nd factor
  • This does NOT work when using an embedded email system (like on a phone)
  • Has a fancy password recovery mechanism that may be a little too fancy
  • Seems to ignore registered SMS as recovery option
Even if you remove the recovery email and only have SMS, Google sort of forces you through the social password recovery scheme. I was able to get a recovery password sent to an old email address. I guess this isn't really a scientific test as I didn't try this from different IP addresses, etc. Still, this seems a little too fancy (read: tries to reduce the number of support calls Google gets...)

Also, here again, if both Yahoo and gmail are on the same phone, it is possible to recover both (again, hope your phone's lock system is robust)

What Next?

I'll discuss malware proxy risks in a future posting. In the mean time, need to investigate which financial institutions allow for:
  • Requiring token based MFA
  • Have non-email based password recovery schemes
  • Optionally, restrict remote financial transactions to non-internet
  • Or, allow for voice confirmed shared secrets ("what is the answer to question 2?")
Yes, the convenience of just typing in a password should be a thing of the past. But, even an MFA solution, or alternate channel password recovery doesn't seem to close all the gaps.





CMSensorRecorder data reliably flowing to Kinesis

I've refactored the iPhone side of the code a bit to better represent background processing of the sensor data. The good thing is that WCSession:sendMessage handles iPhone background processing properly. This is at the expense of having to handle reachability errors in code. The checkpoint of code is here.

Now the flow is roughly:
  • On Watch
    • CMSensorRecorder is activated and is recording records locally regardless of the application state of the Watch
    • When the sensor application is in the foreground, a thread attempts to dequeue data from the recorder
    • And when this data is received a WCSession:sendMessage is used to send the data to the iPhone in the background
    • Only if a valid reply comes back from this message is the CMSensorRecorder fetch position updated to fetch the next unprocessed sensor data
  • On iPhone
    • A background thread is always ready to receive messages from the Watch
    • Those messages are saved to a local Kinesis queue
    • A timer based flush will submit this Kinesis queue to AWS
    • AWS credentials from Cognito are now manually refreshed by checking the credentials expire time
    • The send and submit kinesis calls are now asynchronous tasks
So this is pretty close to continuous feed on the iPhone side.
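The key contract in the flow above is that the fetch position only advances on a valid reply. A toy model of that dequeue loop (names are mine, not the actual code):

```javascript
// Dequeue loop sketch: fetch a batch at the current cursor, send it, and
// advance the cursor only when the send is acknowledged. An unacknowledged
// send re-fetches the same span next time, so data is re-sent, not dropped.
function makeDequeuer(fetch, send) {
  let cursor = 0;
  return function step() {
    const batch = fetch(cursor);
    if (batch.length === 0) return false;        // nothing new to do
    if (send(batch)) cursor += batch.length;     // advance only on valid reply
    return true;
  };
}
```

The cost of this contract is duplicates on the AWS side whenever a reply is lost, which is exactly what the downstream consistency checks are for.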

Some areas of durability to re-explore next:
  • How to build a Watch dequeue that can run when the application isn't in foreground?
  • Is there another way for WCSession to send to a background task other than sendMessage?
  • How reliable is the sendMessage call?
    • When the iPhone is out of range
    • When the iPhone is locked
    • When it is busy running another application
    • I do see some transient 'not paired' exceptions when sending at volume
  • While this does allow for automatic background processing, is there a simpler way of transferring data that doesn't require the application handling reachability errors?
  • How reliable is the Kinesis send-retry when the iPhone can't reach AWS?
I will next be building more quantitative checks of the actual data sent through the system to understand where data gets sent more than once, or where it is lost.

Wednesday, August 26, 2015

Xcode 7 beta6: Bitcode issues between WatchOS and iPhone solved!

Getting there!  A quick upgrade to Xcode 7 beta6 fixed this issue.  We now have data transfer from Watch Accelerometer to CMSensorRecorder to Watch app to iPhone to Kinesis -- yes data is flowing, mostly resiliently too; with intermittent focus, connectivity, etc.  Here is the code.

And some screen dumps (explained below):


The Watch screen dump is pretty much as before.  You will see the Cognito integration (using Amazon as an identity provider).  The first 4 lines are the identity details.  The next 4 lines are information regarding the Kinesis storage: the STS token expire time, the amount of local storage consumed, how many flushes to Kinesis have occurred, and when.

Of course the current code still relies on these transfer operations being in focus, an ongoing area of research as to how to make this a background operation on both the Watch and on the iPhone.  But still, real data is finally in Kinesis.

TODO: build an auto-refreshing STS token, as this appears to be a known problem.

Next up, write an AWS Lambda function to read from Kinesis, parse the records and then put them into DynamoDB.  Once that is there, a visualization example both on iOS and on a Server-less web service...


Sunday, August 23, 2015

Marketing: make something look like what is intended, not what it is

Well, this has been a depressing past couple of days. This was the time to re-integrate the AWS SDK back into the application in preparation for sending data to Kinesis. I had basic Cognito and Kinesis hello world working back in June on WatchOS 1.0. I'd mistakenly assumed that some sort of compatibility over time would be in order. Not to be the case. Summary:
Yes, it is possible to disable the enforcement of the TLS 1.2 requirement. And this I did: I am now able to get a set of temporary keys for calls to AWS services. How many applications are going to have to do this? All of them?

Worse, it doesn't look possible to use the current AWS SDK with a Watch application. This looks like a pretty ugly show stopper:
  • The 3rd party library doesn't have bitcode support. While you can disable bitcode on the iPhone,
  • the Watch and iPhone have to have the same bitcode support level, and the Watch requires bitcode enabled.
Think about what this means!  7-8 years' worth of iPhone 3rd party library support out there and probably used by a few applications. And these libraries will NOT work with any application that wants to bundle with WatchOS 2.0. The proverbial 'what were they thinking?' comes to mind.

So, I'm stuck: can't integrate with 3rd party library until rebuilt...

The calendar looks rough for Apple:
  • September 9th announcements; WatchOS2 and some new hardware
  • Then they turn on the holiday season marketing; "buy our watch, it has apps"
  • In the mean time, a mountain of developers are trying to figure out how to ship anything on the Watch
  • New message "trust us, our developers will eventually catch up"


Tuesday, August 18, 2015

WatchOS 2.0 beta5: development productivity increases!

I've been trying to characterize a beta5 issue where the CMSensorRecorder system doesn't reliably dequeue. There is some combination of being on the charger and then off and then starting a new debug session where the thing just works. For now, juggling with the Watch on and off the charger along with the occasional restart of debugger and I've managed to get 10 or more debug sessions. Ridiculous productivity tonight! The result is some progress on understanding how the system works.

As of now, the sample program:
  • Can kick off the CMSensorRecorder for a defined number of minutes
  • When the Watch application is in foreground, it will dequeue whatever records are found and then send them to the iPhone
  • The iPhone will dequeue the records and test them for consistency by looking for gaps in the sensor record date stamps.
Here is what the displays look like now:


Above, you will see the new gapError count. This is a data consistency check; a quick check to ensure that all data is sequential and updating at 50Hz (the count of 1 is expected; it is the first element since program launch).
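The gap check itself is simple. A sketch of the idea, assuming a 20 ms period and a tolerance I picked arbitrarily:

```javascript
// Gap check: at 50 Hz, consecutive sample dates should be ~20 ms apart.
// Every step outside the tolerance counts as a gap error; the first sample
// has no predecessor, so a fresh launch always reports one error.
function countGapErrors(timestampsMs, periodMs = 20, tolMs = 5) {
  let errors = 1;   // the first element since program launch
  for (let i = 1; i < timestampsMs.length; i++) {
    const dt = timestampsMs[i] - timestampsMs[i - 1];
    if (Math.abs(dt - periodMs) > tolMs) errors++;
  }
  return errors;
}
```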

Updated code is here.

Next up is to plumb AWS back into the rig to send this data to Kinesis...

Monday, August 17, 2015

A visit to the Computer History Museum

I spent an afternoon at the Computer History Museum the other day; a place I haven't visited since their remodel. The presentation is definitely an improved exhibit style tour. Also, a much larger collection with improved descriptions of the various artifacts.

I have to admit, I probably should have worn suspenders and a scruffy beard. That I knew the details of, and in fact worked with, so many of the artifacts in the building was a little scary. Yes, get off of my lawn...

This visit did remind me of another project I have in the backlog:
This is a Polaroid photograph of my desk, circa 1976. A picture of my first digital computer. A hand made Intel 8008 based computer.

This machine (video game) connected to a television for display and had various input and output devices:

  • cassette tape
  • joysticks
  • address and data switches
  • address and data lights
  • and video output
It turns out I have a bundle of design and code papers for this machine (e.g. the blueprint on the world map at the back of the photo). And a few years ago, from those drawings and machine code, I re-animated the programs in a cycle-accurate machine emulator. Actually, a more accurate description would be to reverse-engineer some partial drawings and code to try to figure out what was intended and then to make an emulation. It was quite an experience to see the programs alive again.

I did the original emulation as a java/swing application. It is now time to port this over to javascript so anyone can run it in their browsers.

I'll scan a few of the drawings and post an update with the details and architecture of this machine. Makes for a good experiment at least.


Sunday, August 16, 2015

WatchOS beta5: CMSensorRecorder working again (almost)

The Journey of Instability is alive and well.

I have been working the last 4 days to get around this error when attempting to dequeue CMSensorRecorder messages (filed as bug 22300167):
Aug 16 18:23:43 Gregs-AppleWatch sensor WatchKit Extension[174] <Error>: CoreLocation: Error occurred while trying to retrieve accelerometer records!
During a couple dozen debug attempts I was able to see data flowing only once.

Again, it is fun to be on the pre-release train with Apple. Yes, they work hard to improve features at the expense of stability. The balance between getting things working and presenting a stable interface is a tough one to manage. I know this looks better inside the company, where folks know in advance not to use experimental interfaces. From the outside, the every-two-weeks, do-it-a-different-way approach is tiring. This will slow the adoption of novel features, I'm sure.

Here's what the workflow was like to get to this point:
  • beta5 upgrade of XCode, iOS9, WatchOS2 (~ 5 hours)
  • discover that the metadata in the existing sensor project is somehow incompatible
  • manually copy over the code to a newly generated project, now simulator and hardware are working (~ 8 hours)
  • port the sensor recorder using the new protocol (a mere 20 minutes)
  • after numerous sessions to real hardware (simulator doesn't support accelerometer) realize that I don't know how to work around the above bug (~ 8 hours)
As many of you have lived through: the repeated reboots, hard starts, 'disconnect and reconnect', yet another restart because the iPhone screen is wedged, restarting the debug session because it times out, etc. This isn't the most productive development environment. But I persist.

Code is updated here. Works most of the time now. Working on additional tests around restart, etc.


Wednesday, August 12, 2015

Whoops! beta5 has a new protocol for CMSensorRecorder...

I guess my earlier post noting that the batch based fetch could use an inclusive/exclusive flag is moot. Apple have removed that method call in beta5. Here are the sensor interfaces in beta5:
  • public func accelerometerDataFromDate(fromDate: NSDate, toDate: NSDate) -> CMSensorDataList?
  • public func recordAccelerometerForDuration(duration: NSTimeInterval)
Lovely. This means that now, in addition to dealing with the 'up to 3 minutes' delay in retrieving data, we need to keep NSDate fences for iterative queries. The documentation is a little ambiguous as to what date values should be used. I'll guess they are actual sample dates (the documentation hints at 'wall time'). We'll see.

Anyway, the beta5 upgrade is going OK. I'll port to the new style next (to also see if fromDate is inclusive or exclusive for returned samples). I'll also check to see if CMSensorDataList needs to be extended to include generate().
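To make the port concrete, here's a rough sketch of how I expect the date-fenced iteration to look. The class and property names are mine, and whether fromDate is inclusive of the boundary sample is exactly the open question:

```swift
import CoreMotion

// Sketch only: keep an NSDate fence and advance it after each query.
class RecorderPoller {
    let recorder = CMSensorRecorder()
    var fence = NSDate()          // end of the previous query window

    func dequeue() {
        let now = NSDate()
        // beta5 API: returns samples between the two dates ('wall time'?)
        if let list = recorder.accelerometerDataFromDate(fence, toDate: now) {
            // CMSensorDataList may need a generate() extension to be
            // iterable; enumerate the CMRecordedAccelerometerData here
            _ = list
        }
        fence = now               // advance the fence for the next query
    }
}
```

If fromDate turns out to be inclusive, the fence will need a small epsilon bump to avoid re-handling the boundary sample.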

Monday, August 10, 2015

Cleaner approach to distributing CMSensorRecorder data

I've taken another path for handling accelerometer data.  Recall:
  • CMSensorRecorder does continuous sampling, but the data visibility is slightly delayed
  • WCSession.sendMessage requires both watch and iPhone apps to be active and in focus
  • A Watch application's threads pretty much will not be scheduled when the watch is quiescent or another application is running
  • The goal is to have an application running on the Watch that does not require continuous access to the iPhone -- instead, enqueuing data for later delivery
  • Don't forget NSDate still can't be sent through WCSession... (until beta5?)
All of the above make it difficult to have continuous distribution of accelerometer data off the Watch. There still may be some technique for having a system level background thread using complications, notifications, etc. I haven't found it yet. So, for now, I will simply distribute the data when the application is running on the Watch.

Let's try WCSession.transferUserInfo() as the transfer mechanism. The flow on the Watch is roughly:
  • CMSensorRecorder starts recording as before
  • An independent dequeue enable switch will start a timer loop on Watch
  • When the timer fires it will look for and enqueue any new batches of sensor data from the recorder
This works pretty well. Sensor data is recorded, a polling loop is fetching data from the recorder and queueing the data for send to the iPhone. Then, when the iPhone is reachable, the data gets sent and handled by the phone app.
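The Watch-side loop above can be sketched roughly as follows; the timer interval and payload keys are my own placeholders:

```swift
import Foundation
import WatchConnectivity

// Sketch: a timer-driven dequeue that hands batches to WCSession.
// transferUserInfo queues the payload and delivers it whenever the
// iPhone becomes reachable, so the Watch app never blocks on the phone.
class DequeueLoop: NSObject {
    var timer: NSTimer?

    func start() {
        // poll the recorder every 30 seconds (interval is a guess)
        timer = NSTimer.scheduledTimerWithTimeInterval(30.0, target: self,
            selector: "tick", userInfo: nil, repeats: true)
    }

    func tick() {
        // look for new batches from CMSensorRecorder here, then enqueue
        let payload: [String: AnyObject] = ["batch": 836]  // placeholder shape
        WCSession.defaultSession().transferUserInfo(payload)
    }
}
```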

Code is here.  Some screen shot examples are here:




The astute reader will notice the difference between batchNum and lastBatch above (835 vs 836). batchNum is the next batch we are looking for and lastBatch is the last batch handled.  I think this is a problem in the api -- basically my code needs to know that batch numbers monotonically increase so it can inclusively search for the next batch after what has been handled.  This seems to be a bit too much knowledge to expose to the user.  I suggest that the call to CMSensorRecorder.accelerometerDataSince(batchNum) should be exclusive of the batch number provided.  Or, add an optional parameter (inclusive, exclusive) to keep the API more or less compatible.
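In code, the workaround looks something like this. This is a sketch: the handling is elided, and it leans on the undocumented assumption that batch identifiers increase monotonically:

```swift
import CoreMotion

// Because accelerometerDataSince(batchNum) is inclusive, the caller has
// to re-fetch the last handled batch and skip it manually.
func pollRecorder(recorder: CMSensorRecorder, inout lastBatch: UInt64) {
    guard let list = recorder.accelerometerDataSince(lastBatch) else { return }
    for item in list {  // assumes a generate()/SequenceType extension on CMSensorDataList
        guard let data = item as? CMRecordedAccelerometerData else { continue }
        guard data.identifier > lastBatch else { continue }  // drop the already-handled batch
        // handle(data) ...
        lastBatch = data.identifier
    }
}
```

An exclusive API (or an inclusive/exclusive parameter) would make the guard on the identifier unnecessary.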

Next up:
  • Getting on beta5 s/w stack (Xcode, iOS9, WatchOS2)
  • Going back to attempt native object transfer from Watch to iPhone
  • Moving the iPhone data receiver to a first class background thread
  • Adding the AWS Cognito and Kinesis support for receiving sensor data

Saturday, August 1, 2015

Relabelling cans: an afternoon at SF/Marin Food Bank

Family is back in town and we found ourselves at the SF/Marin Food Bank today.  This session turned into quite an assembly line.  Our shift wound up re-labelling and packaging 17,000 cans of Chicken Broth:


Was a nice afternoon, and we can skip the YMCA for today.

Sign up!  It is easy.

Tuesday, July 28, 2015

WatchOS 2.0 beta4: WCSession working a LOT better!

Some good news on this end. After completing the upgrade of Watch and iPhone to beta4, WCSession is working a lot better. With the same ttl source from before the upgrade, reachability and connectivity make a lot more sense. For example:
  • reachability seems to stay the same as long as the Watch and iPhone are in range.
  • it is also not a function of either application being live and in focus. This is huge as now my iPhone background thread continues to dequeue even though the Watch is displaying some other task (or is in idle mode)
  • killing the iPhone app doesn't change Watch's reachability to iPhone. (but the process timer commands are stopped as expected)
(updates)
  • the iPhone appears to suspend process timers when the application is not in focus.
  • BUT, these timers seem to wake up when the phone is awake at the lock screen and the app is in focus under the lock screen!?!
  • (the above feels like some rickety support for the badge at the lower left corner of iPhone indicating when apps on the Watch are doing something interesting)
  • there is some pause or suspend of a session during a voice call too
Regardless, this does change the strategy a bit. It may now be possible to have the Watch initiate dequeue events to the iPhone (perhaps even by using the WCSession.transferUserInfo which is nicely queued for us). I will work on background threads on the Watch now, having it initiate transfers to the iPhone, instead of the other way around.

p.s. upgrade was nominal this time; configuration, Xcode symbols, pairing, etc. It just worked.


Watching files copy: WatchOS 2.0 beta4 upgrade...

It appears that a key background process feature is available starting with beta4 (WKExtensionDelegate:didReceive*Notification). This is something I want to pursue, primarily for background dequeue of sensor recorder data. This means several hours of watching software install (everyone should know, computers are mostly good for copying data around).

So far:

  • Xcode 7.0 beta4 upgrade is looking fine (this is a lot of software to download, unpack for install, and then verify for execution and then unpack and boot up the simulators!)
  • iOS9 beta4 upgrade is looking good. Apps appear to mostly work as or better than before (TODO: test wifi stability for a while, it has been off during beta3 work)
  • WatchOS 2.0 is forthcoming. The iPhone watch app is loaded with the new profile and shows an upgrade to beta4 is available. Tomorrow will bring 2-3 hours more watching files copy (download, get it over to watch, get it loaded and restore from backup)
Tomorrow!

Monday, July 27, 2015

Distributing Watch sensor data using WCSession

I've taken a checkpoint on the code refactoring, mostly to demonstrate message flow between the Watch and the iPhone. Features of this checkpoint:

  • Watch and iPhone display reachability on the fly
  • Watch initiates a sensor record operation
  • Now, when a dequeue operation is enabled:
    • Watch sends the dequeue switch command to iPhone
    • iPhone starts a timer
    • iPhone replies to the Watch that it completed this operation (as part of a diagnostic)
  • When timer fires on iPhone:
    • iPhone sends message to Watch asking for next batch of sensor data
    • Watch fetches up to one 'batch' of data and returns it to iPhone
    • iPhone displays some basic data about the return packet
  • Watch keeps track of how many timer commands it receives
  • iPhone keeps track of how many timer attempts it tries
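The iPhone-side timer handler in that flow can be sketched as follows; the message keys and log strings are my own placeholders:

```swift
import WatchConnectivity

// Sketch: on each timer fire, ask the Watch for its next batch.
// sendMessage only succeeds while both apps are live, which is why
// reachability matters so much in this experiment.
func timerFired() {
    let session = WCSession.defaultSession()
    guard session.reachable else { return }
    session.sendMessage(["cmd": "nextBatch"],
        replyHandler: { reply in
            // the Watch returns up to one batch of sensor data
            print("received reply with \(reply.count) keys")
        },
        errorHandler: { error in
            print("request failed: \(error.localizedDescription)")
        })
}
```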
Here's a part of the Watch and iPhone diagnostic displays for reference.  You will note that both sides are reachable and data has been passed to the iPhone.




This exercise is primarily using the sendMessage() method which only works when the application is live on both sides. But with a bit of careful execution, data flow is illustrated. Features and problems with this approach:

  • Reachability is a function of (regardless of the documented reachable state):
    • whether the two devices can communicate
    • whether the iPhone application is in focus (for Watch)
    • whether the Watch application is in focus (for iPhone)
  • The Watch cannot launch the iPhone application with this approach. Similarly, fetch requests from the phone will not be delivered if the Watch isn't actively running the application
  • Also, there is a sequence where system state is incorrect:
    • start Watch app (do not start iPhone app)
    • note that the state is reachable (this is the problem)
    • enable the dequeue (notice that delivery is stuck in sending...)
  • In general, the reachability support works ok, it is possible for the Watch to go in and out of range, either device to go into airplane mode, be rebooted, application killed and restarted, etc. and all continues to work as designed.
Next up will be to work with some of the queued message types and then more importantly to figure out how to launch an iPhone background method.

Sunday, July 26, 2015

Another visit to the SF/Marin Food Bank today!

We spent another afternoon up at the SF/Marin Food Bank today. This time, oranges and packaged oats! Was nice to see so many families today -- just regular family outings. The kids really enjoyed doing the hard work, made the time go by quickly.

Here's a linked photo to give you a feel for what the work is like (thank you kqed):



Again, if you are looking for a nice afternoon with friends, family or just on your own; take a weekend morning or afternoon and go help out at the food bank. Your work does feel appreciated and given the magnitude of the operation, this is a good effort for a good cause.

You can easily schedule your time here, although some folks just show up on short notice. Depending on the day, this often just works fine.

Thursday, July 23, 2015

Forwarding CMSensorRecorder results to iPhone for display

My next step in building out a sensor data stream is to send recorded data to the iPhone. The ttl code on github is up to date. Initially this is just displayed on the phone, a bit of a hello world. However, this illustrates a few features and issues:
  • The phone and watch do not have to be connected during the call to transfer data
  • Data split into batches
  • The WCSession callbacks (data sent, error sending)
  • The serialization bug in NSDate in beta3 (send it as string...)
  • Need to figure out why NSString(format: "%s", dataFormatter.stringFromDate()) doesn't encode properly...
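On that last point: %s is the conversion for a C string (a null-terminated char *), so handing it an NSString reads the wrong bytes. Using %@, or skipping the format call entirely, should encode cleanly:

```swift
import Foundation

let dateFormatter = NSDateFormatter()
dateFormatter.dateFormat = "yyyy-MM-dd HH:mm:ss.SSS"

// broken: %s expects a char *, not an NSString
// let bad = NSString(format: "%s", dateFormatter.stringFromDate(NSDate()))

// fine: %@ takes an object -- or just use the String directly
let good = NSString(format: "%@", dateFormatter.stringFromDate(NSDate()))
let simpler = dateFormatter.stringFromDate(NSDate())
```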
Here's a screen shot of what the results look like.  We see the watch processed 4082 events.



And here's the first page of the events as shown on the iPhone.



Next up: get the reader on a background thread, get the serializer working more cleanly (perhaps using temporary file transfer), and then send data to Kinesis.  Next, adjust the UI to have an 'enable' recorder feature which will:
  • Kick off the recorder in small batches with a daemon thread to kick off another batch continuously if the enable switch is set
  • Have a periodic dequeue from the sensor IFF the iPhone is reachable (there is a notify for this), only sending the newly available batches.
  • Similarly, the iPhone will queue up this data (using the local Kinesis buffering) for delivery when its network is reachable
At that point there should be a durable delivery chain from watch through phone to AWS...

Tuesday, July 21, 2015

A fix for remote debugging iOS9 WatchOS2 beta3!

Many thanks to dhash. Looks like an upgrade issue, where Xcode 7 isn't quite working when state from prior Xcode exists in local directories.  From Apple Forum Thread:
dhash (edited gm)
Jul 21, 2015 1:08 AM 
I have found the solution to get debug working all the time.
  1. Quit Xcode then go to /Users/<username>/Library/Developer/Xcode/
  2. There should be 2 folders, "Watch OS DeviceSupport" and "watchOS DeviceSupport"
  3. Delete "watchOS DeviceSupport"
  4. Delete all older versions of Xcode DeveloperPortal* files. Keep the DeveloperPortal 7.0.* files.
  5. Open Xcode
  6. Delete derived data in Windows->Projects
  7. Deploy and enjoy debugging 

Now, back to a more regular development pattern!  Productivity should improve now.

Sunday, July 19, 2015

Deeper dive into CMSensorRecorder

I have modified the ttl program to have a little more control over the sensor recorder, its performance, latency, etc. Several guesses:

  • It looks like the recordAccelerometerFor(seconds) method initiates or extends a global record queue. If a new recordAccelerometerFor action falls inside the range of a prior action, then effectively nothing changes. It isn't clear to me whether other tasks are always setting this. Maybe one of my earlier record functions (at 1 year or 1 day) is still running?!?  Also, there doesn't appear to be a stop function
  • Some documents say the maximum record ring is 3 days, others 12 hours. Perhaps it is 12 hours on the watch
  • The visibility of the data appears to be dependent on some background task that will not run when a program is active. If you keep hitting the "process" action, the data doesn't seem to change
  • The data and sensor rate make sense
The code here helps to dig into what is happening (when you don't have a debugger). Here is an operational display (in two screen-shots). Note: there is a slider scrolled off the top of the screen above 'durationMins' used to dial in the duration for the recordAccelerometerFor() method.



Here we see the last time Start was run was 21:45. And 15 minutes later, a press of the 'Process Since' shows 34k events, batch number 21868, the values for the last event and the time range for the events in that block. I do need to figure out what is going on with the min date. Again, probably some fence value; -1 or minDate or something.

So, the sensorRecorder is working -- now need to figure out the best approach to using it.

You really have to want it: Developing on Apple pre-release software...

Given the current state of debugging WatchOS2 beta3, I've been reduced to printf debugging. Basically, try to figure out what might need to be shown and build big property panels that show a few values. From this try to figure out what is really going on. This is even more fun as the only reliable means of distributing this code to the hardware is via an archive-> ad-hoc deployment through iTunes to phone!

Recap:

  • no debugging, resort to printf style debugging
  • no simulator support of accelerometer so have to debug on the hardware
  • no quick means of distributing code to hardware
  • no reliable wifi on the phone -- so need to use cellular data
  • often times this whole flow needs resets, restart, etc [i haven't had to do re-pairing lately]
I've gone through this a few times with Apple in the past; if you really want to be on a beta program, be prepared to get creative. I'm fairly certain the lead for the rollup of all these features has a thankless job: days of arguing with teams, "you missed your date, and the workaround sucks".

I know this is pretty much the norm inside Apple. After all they are always working on the next version of something. I think, however, there are not too many folks inside Apple who are trying to use all the product -- sort of like an external developer would. They are big and compartmented. And a product tends to look like the organization that built it...

(Back in the day, I was 'tech lead' for a couple Sun Solaris releases; running around trying to corral a bunch of teams to make their dates.) 


Saturday, July 18, 2015

Helped out at Sacred Heart Community Service today

Today, Stanford Alumni Association arranged a volunteer shift for several of us at Sacred Heart Community Service. Spent the morning putting together food bags that'll be distributed directly next week.

Sacred Heart is quite an operation. They distribute most of their items directly to those in need. Food, non-cook food, clothing, etc. A large operation that has been doing this for over 50 years.

Our assembly line crew of about 15 volunteers bagged 1000-1100 bags of food, maybe 20-25 pounds each. Somehow I wound up on the end of the line, meaning I was the one loading these bags into the pallet bins. I can skip the YMCA today, that is for sure.

A nice way to spend the morning with friends and family!

Friday, July 17, 2015

Raw sensor data on the Watch -- fairly limited use

Yes, the accelerometer is accessible to an in-focus application, however, since there is no control over keeping an application in focus, using accelerometer directly is pretty much only for test. Here's a screen dump for your amusement (the code is in the ttl repo).


What you see here is a test where I move the accelerometer event stream activation to a switch. Entering the application only starts the display update. The sensor is fine, but the total count shows the updates stop when the focus goes out of the application. And there appear to be cases where the app itself is terminated -- e.g. re-entering the application shows the state as reset. That the app is suspended or sensor data isn't delivered is pretty much a showstopper for an acquisition system.

A little reading and, oops, I should have been using CMSensorRecorder. This appears to be a way to start a background recorder capturing sensor data. WWDC presentation on CoreMotion suggests up to 3 days of data can be recorded at up to 50hz.  And, Apple warns, this will have some impact on battery life, something I will next measure.


Wednesday, July 15, 2015

Limits of CoreMotion on Apple Watch

Too bad.  I have confirmed some comments from other folks testing this device.  The CoreMotion package is somewhat hardware deficient on the Watch.  Here's a screen dump from test program:


What this confirms is that only the 3-axis accelerometer is available in the Watch.  Since the gyroscope and magnetometer aren't available, the high-level motion library is also unavailable.

And, as shown in the code, the maximum delivery rate is 100 samples/second.  Incidentally, the accelerometer orientation is:
  • +X is in the right direction (toward the crown)
  • +Y is in the top direction (toward the 12 o'clock position of the watch)
  • +Z is coming out of the watch toward the viewer
Apparently the LIS3DH is, or is close to, the actual hardware used (thanks to Mr. Sparks for the hint).  It would be interesting to see if the sampling range and sensitivity are actually dynamic -- or if they are hard coded to +/-4G, 2mG, etc. I'll put together a test to see if this can be determined (Apple doesn't yet provide this information, so any signal processing they do is a little harder to derive).

Next step will be to log the actual events to determine sample jitter, actual delivery jitter, or delivery dropout, as it appears that the samples are coming out of the MEMS at a constant rate. And of course to see what happens when the application isn't in the foreground. Right now my code turns off the event delivery, so I'll make some changes to see if the host OS actually stops delivering events when the app is out of focus. I'll need to dig into HealthKit a bit too...
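A sketch of the jitter measurement I have in mind; the queue choice and logging are placeholders:

```swift
import CoreMotion

// Sketch: request the 100 Hz maximum and log the spacing between
// consecutive sample timestamps; steady deltas with occasional gaps
// would point to delivery dropout rather than sampling jitter.
let manager = CMMotionManager()
var lastTimestamp: NSTimeInterval = 0

manager.accelerometerUpdateInterval = 1.0 / 100.0
manager.startAccelerometerUpdatesToQueue(NSOperationQueue()) { data, error in
    guard let data = data else { return }
    if lastTimestamp > 0 {
        let delta = data.timestamp - lastTimestamp
        print(String(format: "ts=%.4f delta=%.4f", data.timestamp, delta))
    }
    lastTimestamp = data.timestamp
}
```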

Wow! finally found a workaround to run an app on the watch with beta3 software

I, like quite a few other folks, have been stuck trying to get anything to run on the hardware. Various combinations of code signing issues and so on were down the wrong path of getting a app to run. Thankfully "BJamUT" posted a reliable workaround to getting code on the phone/watch.  From Apple forums:
Jul 15, 2015 2:29 PM
Update: I can run (not debug) every time on the watch by installing an ad hoc distribution through iTunes.
  1. Product -> Archive  [You may have some code signing issues to figure out here--especially if you have multiple people who sign the builds for your organization.]
  2. Export... -> for Ad Hoc Distribution
  3. Uninstall the app from your iPhone.
  4. Drop the ipa file onto iTunes.
  5. Select your iPhone in iTunes, go to the Apps tab, and find the app you just dropped in.
  6. Click install.
  7. Click apply.
  8. Make sure that "Show App on Apple Watch" is selected in the Apple Watch app, and wait for it to install.
It should run at least.  I wasn't able to attach the debugger to the processes, but I was able to see the performance of the changes that I had already tested on the simulator.

The good news is that performance of button and table responsiveness is probably 10x that of beta 2 if you have lots of buttons.  It was tough to get it to run, but I feel like there was a treasure at the end of the hunt.
Yes, the downside is I'll have to resort to some sort of printf debugging here -- perhaps to a property panel or glance or something.  This code pattern doesn't seem to work with the debugger either.

Anyway, I can now take the code from last Saturday and distribute it into my phone via local iTunes. And the app with accelerometer processing starts up and runs even when the phone is offline or out of range!  This is the demo I have been working toward; having an application run "natively" on the Watch!

Status:

  • beta3 is reasonably stable
  • battery life is good
  • phone communication via wifi is disabled -- it is unreliable across cellular link changes
  • basic apps are working roughly
  • (waiting for next beta dump to reset all the "this works, this doesn't"...)


Sunday, July 12, 2015

China Visit Pictures

I've posted a few photos from our visit to China this year.  This trip included:
  • Sichuan: visiting cities, temples, mountains, lakes, Tibetan villages, and especially spicy food!
  • Visiting mother-in-law's newly remodeled home in Wuxi along with some tasty dumplings and banquets
  • A business trip/banquet to Shanghai 
  • Visiting more of our family in Beijing with trips to Summer palace, great wall, old town districts and more banquets
  • Another business trip/banquet in Beijing

All in all a packed two+ week trip!

Saturday, July 11, 2015

Helped out at the San Francisco Food Bank this afternoon

The Stanford Alumni association has a pretty large volunteer organization.  Today was a nice session up at the San Francisco Food Bank.  Stanford had a pretty good turn out and we had fun with food preparation.

Today, I was part of a crew packaging up bundles of snack food.  The food bank gets large pallets of food and our job is to re-package it into classroom sized portions.  Later I wound up sorting plums and nectarines.  A bit more work, and a lot more lifting.  Still the 2.5 hours went by fast.

I do recommend this to anyone looking for a great way to spend time with friends and family.  Food banks feed a lot more people than you might think (this one distributes 107,000 meals a day).  Here's a direct link to the volunteer calendar.  Pick a time and show up!


CoreMotion accelerometer working

I've updated the ttl project to include basic accelerometer output on the display.  It looks like this may be the only sensor on the Watch.  I need to dig a bit more to make sure (e.g. no gyros or magnetometer).

(And no, the simulator doesn't have this support -- so testing this would require plumbing in a synthetic data stream or some sort of unit test data source.  Still a pretty primitive ecosystem.)

Anyway, this is what the screen looks like for the ttl application (actual screen dump from the watch itself):


Other notes:
  • Debugging is exceptionally difficult.  The screen dump from above was from one of only two sessions that have actually worked on the real hardware.  The profile reset was only part of the problem.  Disabling "diagnostics to Apple", iMessage, Hey Siri, etc. all seem to help kick in one more session.  Perhaps the reliability of debugging is a function of the network between the watch and the phone.  Or we're simply still stuck in betaware.
  • The iPhone5 may be part of the problem -- not the speediest machine out there.
  • You will notice that the above right columns are in SF font and are not constant width.  I do have an outstanding project to understand the best way to get constant width working for numbers as mentioned during the SF font overview.

I think I found the "Untrusted Developer" root cause!

Within a narrow window, I replaced the phone and upgraded to Xcode beta3.  This wound up leaving quite a few provisioning profiles associated with the phone known as "Greg's iPhone".  To fix:
  • Xcode
  • Devices menu
  • Select offending device
  • Right click Profiles
  • Delete all of the provisioning profiles
  • Then re-launch, re-install the applications
All the other attempts, like re-pairing the phone, full restores, and so on masked the root cause, and typically worked only once.

Now back to CoreMotion library.  I've got accelerometer output working -- too bad it isn't supported on simulator.  Next up will be the CMDeviceMotion higher level data.

beta3 release of Xcode7, iOS9, WatchOS2 is out!

Seems turning off "Hey Siri" didn't help much with Watch battery life.  It is still around 7 or 8 hours when running beta2 of WatchOS.  beta3 was released, and the battery burn issue is now resolved!

Upgrade is quite a few long running steps (almost 6 hours):
  • Download new Xcode
  • Remove old Xcode (this seems to help with file collisions...)
  • Install new Xcode
  • Verify an app (e.g. ttl) still works in local simulator (takes a long time to boot initial images of iPhone6 and Watch in simulators)
  • Download iOS9 image
  • Turn off iMessage
  • Restore iPhone to this image (this is a LOT of steps and reboots to get apps and data back in there)
  • Re-pair watch
  • Download WatchOS2 beta3 profile
  • Send profile to email on iPhone
  • Click on that attachment from iPhone
  • Do the profile update, watch reboot cycle again
  • Now do the WatchOS upgrade to beta3
  • Turn iMessage back on
  • Retrust the developer on the phone/watch pair
  • Restart Xcode (to get Watch symbols loaded back into Xcode)
  • Resync the developer's profiles in Xcode (Prefs -> Accounts -> ${user} -> View Details -> Resync)
  • At this point, attempt to launch the program to the real devices

Anyway, I am able to get the ttl application loaded to the Watch -- and this application demonstrates the controller running in the watch.  Now I have an application that can launch and run without the paired iPhone being in contact.  Next step: understanding sensor data.

Tuesday, July 7, 2015

Betaware: Upgrading to WatchOS 2 and iOS9

beta2 of the new WatchOS and iOS has been a bit unstable for me.  The upgrade of Xcode, iOS and WatchOS worked ok.  A few existing apps are crashing on both the phone and the watch.  But otherwise, the phone operation remains about the same.  Xcode 7 appears to reliably get applications out to the phone.

However, the WatchOS 2 upgrade isn't quite as robust.  Battery burn is high.  It may be that the OS is set to some debug mode.  Or maybe there is some crash/loop program under the hood.  "Hey Siri" being on by default on the watch is a promising suspect.  I've turned it off for now.  We'll see.

Debugging any application is a hit-or-miss effort.  Some folks have noticed a pattern where the iPhone5 as the host may be part of the issue, as it isn't as quick as newer phones.

The Swift2 porting seems straightforward.  I am trying to figure out how to get a pure standalone application running on the watch.  The goal is to demonstrate an application that continues to run in a controlled fashion when the phone is off or out of range.  I sort of have this working now (after the first launch of the application has completed while in range of the phone).

Sunday, July 5, 2015

Tried Flotation Therapy today!

I've been meaning to try flotation therapy for a while.  This is sometimes known as Sensory Deprivation, but I think flotation is more accurate.  Today was the day and I have to say, it met the hype!

Travelled up to Float Matrix in San Francisco today.  The appointment pretty much went according to what is listed in their FAQ:

  • Watch the eating and drinking before the appointment
  • Arrive and take a deep exfoliating shower
  • The attendant guides you to your float pod -- you put in ear plugs
  • You get in and close the door and float on your back
  • An hour later they knock on the outside, you get out, take another shower and that's it
Anyway, the super dense Epsom Salt mix does several things:
  • Keeps you floating out of the water so your face is comfortably above water
  • Keeps your skin from getting wrinkles (salt balances)
  • Does a nice job of nourishing your skin (whatever epsom salts do)
  • Keeps the water fairly disinfected on its own
I liked it.  At the beginning, the ringing in my ears [even while in a basement, in an insulated box, with ear plugs in] was the major stimulus that remained.  Then a little vertigo; with no visual reference, a slight tumbling sensation.  A very slight sensation.  After a while, the vertigo and ringing subsided and I definitely felt my muscles relax -- lower back, neck, lower legs -- all sore from a hike yesterday.  And then a little later my mind did indeed calm down; I stopped thinking about work and to-do lists and such and really started to enjoy the buoyancy, temperature, blackness, quiet, slow heartbeat.  The water density made it easy to float, so I could move my arms a bit to stretch, legs, etc. -- all without worrying about getting the liquid in my eyes [it DOES sting, I found out].

They have a nifty ventilation trick for the relatively humid air in the pod -- there is a high and low vent pipe to the outside that wind up allowing for a slight convection.  So when your head is at the far end of the pod, you get a soft flow of fresh air.  However, with the humidity, scentless epsom salts, temperature control, this was very comfortable -- no stuffed airways, no throat tickles.

Around the end I did almost doze off -- more like meditative state -- brain stopped, no motion, no sound, touch, etc.  This is the goal -- and probably why they have 90 minute sessions too.

I came out of the place nice and relaxed, joints feel good and so on.  I think I'll work to make this a regular visit.

Saturday, July 4, 2015

Build a countdown timer application

Well, here's another experiment just prior to flipping to WatchOS 2.  This application is a countdown timer that displays days/hours/minutes/seconds until a target time.

The application is mostly an exercise in date math, but it does highlight the limitations of the current OS -- that being that the application is pretty much just a remote display from the phone.  e.g. this application will not start, nor update if the paired phone is not connected.
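The date math itself is straightforward with NSCalendar; a minimal sketch (the target date here is made up):

```swift
import Foundation

// Sketch: compute days/hours/minutes/seconds remaining until a target.
let calendar = NSCalendar.currentCalendar()
let target = calendar.dateWithEra(1, year: 2016, month: 1, day: 1,
    hour: 0, minute: 0, second: 0, nanosecond: 0)!

let remaining = calendar.components([.Day, .Hour, .Minute, .Second],
    fromDate: NSDate(), toDate: target, options: [])
print("\(remaining.day)d \(remaining.hour)h \(remaining.minute)m \(remaining.second)s")
```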

Some other observations:

  • Not quite sure how to ensure that the UI is updated with the exact countdown time prior to display.  These two behaviors seem a bit asynchronous.  The willActivate() method seems to be called a bit late, or the display update is latent.  I'm not going to worry too much until I flip over to WatchOS 2.
  • I need to do some work with resource bundles, property pages, etc.  Basically to make this application look a bit more production ready. 
  • Same thing with OS and hardware combinations.  I don't have a good feel how general these applications are; UI layout, properties, supported features, etc.  Some learning to do here.
Code is here.

Tuesday, June 30, 2015

Purchased the Apple Watch

Apple has done a nice job with the packaging as usual.  Nice paper, nice protectors, nice unboxing experience.  I recall some earlier unboxing projects we had while I was at Sun Microsystems for one of the lower end workstations.

The watch just worked.  Took it out for a jog while calibrating the fitness application.  Nice.

Now, working to take an earlier hello world application to the watch to verify the toolchain to the hardware.