Showing posts with label Xcode. Show all posts

Tuesday, May 31, 2016

Efficient Apple Watch CMSensorRecorder transport to AWS

Well, I have come full circle. A long time ago, this experiment ran through Rube Goldberg system #1:

  • Dequeue from CMSensorRecorder
  • Pivot the data
  • Send it via WCSession to the iPhone
  • iPhone picked up the data and queued it locally for Kinesis
  • Then the Kinesis client transported data to Kinesis
  • Which had a Lambda configured to dequeue the data
  • And write it to DynamoDB
Then, I flipped the data around in an attempt to have the Watch write directly to DynamoDB:
  • Dequeue from CMSensorRecorder
  • Pivot the data to a BatchPutItem request for DynamoDB
  • Put the data to DynamoDB
  • (along the way run the access key Rube Goldberg machine mentioned earlier)
The problems with both of these approaches are the cost to execute on the Watch and the Watch's lack of background processing. This meant it was virtually impossible to get data dequeued before the Watch app went to sleep.

I did a little benchmarking over the weekend and found that brute force dequeue from CMSensorRecorder is fairly quick. And the WCSession sendFile support can run in the background, more or less. So, I will now attempt an alternate approach:
  • Dequeue from CMSensorRecorder
  • Minimally pivot the data (perhaps to raw binary) into a local file
  • WCSession:sendFile to send the file to the iPhone
  • Then iPhone gets the file and sends it itself to AWS (perhaps a little pivot, perhaps S3 instead of DynamoDB, etc.)
  • (along the way a much simpler access key machine will be needed)
The theory is that this'll get the data out of the Watch quickly during its limited active window.
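The "minimal pivot" above could be as simple as packing each sample into a fixed-width binary record before handing the file to sendFile. A sketch in Python (the 32-byte little-endian layout is my assumption, not an actual format):

```python
import struct

# Hypothetical fixed-width record: timestamp plus x/y/z, all little-endian doubles.
RECORD = struct.Struct("<dddd")  # 32 bytes per sample

def pack_samples(samples):
    """Pack (t, x, y, z) tuples into a flat binary blob suitable for sendFile."""
    return b"".join(RECORD.pack(*s) for s in samples)

def unpack_samples(blob):
    """Reverse the packing on the iPhone side."""
    return [RECORD.unpack_from(blob, off) for off in range(0, len(blob), RECORD.size)]

# 25 seconds of 50 Hz data, like one dequeue loop's worth:
samples = [(1451883716.02 + i / 50.0, 0.01 * i, -0.5, 0.98) for i in range(1250)]
blob = pack_samples(samples)
# 1250 samples * 32 bytes = 40,000 bytes, versus the ~88 KB JSON payloads
# observed for 1250 samples in earlier tests.
```

The point is just that a dumb fixed layout both shrinks the file and skips JSON serialization entirely during the Watch's limited active window.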

We'll see...

Saturday, May 28, 2016

The limits of AWS Cognito

Well, after a slight hiatus, I spent a little time understanding how to use AWS Cognito in an application. I've now got a more or less running Cognito-as-STS-token-generator for the Apple Watch. Features:

  • Wired to any or all of Amazon, Google, Twitter or Facebook identity providers
  • Cognito processing occurs on the iPhone (hands STS tokens to Watch as Watch can't yet run the AWS SDK)
  • Leverage Cognito's ability to 'merge' identities producing a single CognitoID from multiple identity providers
  • Automatic refresh of identity access tokens
Here's the iPhone display showing all the identity providers wired:

Ok, the good stuff. Here's what the access key flow now looks like:


There are a lot of actors in this play. The key actor for this article is the IdP. Here, as a slight generalization across all the IdPs, we have a token exchange system. The iPhone maintains the long-lived IdP session key from the user's last login. Then, the iPhone has the IdP exchange the session key for a short-lived access key to present to Cognito. For IdPs like Amazon and Google, the access key is only good for an hour and must be refreshed...

Let me say that again: today, we need to manually refresh this token for Cognito before asking Cognito for an updated STS token! Cognito can't do this! For reference, read Amazon's description here: "Refreshing Credentials from Identity Service"

Especially in our case, where our credentials provider (Cognito) is merely referenced by the other AWS resources, we need to intercept the Cognito call to make sure that on the other side of Cognito, the 'logins' are up to date.

So, I replumbed the code to do just this (the 'opt' section in the above diagram). Now, a user can log in once on the iPhone application and then each time the Watch needs a token, the whole flow tests whether or not an accessKey needs to be regenerated.

For reference, here's the known lifetimes of the various tokens and keys:
  • The Watch knows its Cognito generated STS Token is good for an hour
  • Amazon accessTokens are good for an hour (implied expire time)
  • Google accessToken is good until an expire time (google actually returns a time!)
  • Twitter tokens don't expire, so its accessKey is effectively unlimited
  • Facebook's token is good for a long time (actually the timeout is 60 days)
  • TODO: do any of the IdPs enforce idle timeouts? (e.g. a sessionKey has to be exchanged within a certain time or it is invalidated...)
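All of this bookkeeping boils down to one predicate: refresh any key whose expiry falls inside a lead-time window. A sketch (the five-minute window constant is my choice, not from the app):

```python
from datetime import datetime, timedelta

REFRESH_WINDOW = timedelta(minutes=5)  # lead time before expiry; arbitrary choice

def needs_refresh(expires_at, now=None):
    """True if a token is expired or about to expire.

    expires_at=None models providers like Twitter whose keys never expire."""
    if expires_at is None:
        return False
    now = now or datetime.utcnow()
    return expires_at - now < REFRESH_WINDOW

now = datetime(2016, 5, 28, 12, 0, 0)
assert not needs_refresh(now + timedelta(hours=1), now)   # fresh one-hour token
assert needs_refresh(now + timedelta(minutes=3), now)     # inside the window
assert not needs_refresh(None, now)                       # Twitter-style: never
```

With a check like this run on every Watch request for an STS token, the one-hour Amazon/Google keys get regenerated just in time.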
So, with all these constants, and a little lead time, the Watch->iPhone->DynamoDB flow looks pretty robust. The current implementation is still limited to having the Watch ask the iPhone for the STS since I haven't figured out how to get the various SDKs working on the Watch. I don't want to rewrite all the IdP fetch code, along with manual calls to Cognito.

Plus, I'm likely to move the AWS writes back to the iPhone as the Watch is pretty slow.

The code for this release is here. The operating code is also in TestFlight (let me know if you want to try it).

Known bugs:
  • Google Signin may not work when the app is launched from the Watch (app crashes)
  • Facebook login/logout doesn't update the iPhone status section
  • The getSTS in the Watch is meant to be pure async -- I've turned this off until its logic covers more of the edge cases.
  • The webapp should also support all 4 IdP (only Amazon at the moment)




Sunday, February 7, 2016

sensor2 code cleanup -- you can try it too

After a bit of field testing, I've re-organized the sensor2 code to be more robust. Release tag for this change is here. Major changes include:
  • The Watch still sends CMSensorRecorder data directly to DynamoDB
  • However, the Watch now asks the iPhone for refreshed AWS credentials (since the AWS SDK isn't yet working on Watch, this avoids having to re-implement Cognito and login-with-amazon). This means that with today's code, the Watch can be untethered from the iPhone for up to an hour and can still dequeue records to DynamoDB (assuming the Watch has Wi-Fi access itself)
  • If the Watch's credentials are bad, empty or expired and Watch can't access the iPhone or the user is logged out of the iPhone part of the app, then Watch's dequeuer loop is stopped
  • Dependent libraries (LoginWithAmazon) are now embedded in the code
  • A 'logout' on the phone will invalidate the current credentials on the Watch
This code should now be a bit easier to use for reproducing my experiments. Fewer moving parts, simpler design. I'll work on the README.md a bit more to help list the steps to set up.

And finally, this demonstrates multi-tenant isolation of the data in DynamoDB. Here's the IAM policy for logged in users:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1449552297000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchWriteItem",
                "dynamodb:UpdateItem",
                "dynamodb:Query"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:499918285206:table/sensor2"
            ],
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": [
                        "${cognito-identity.amazonaws.com:sub}"
                    ]
                }
            }
        }
    ]
}

In the above example the important lines are in the Condition block -- this condition entry enforces that only rows whose hash key matches the logged-in user's cognitoId can be read or written. This is why we can build applications with direct access to a data storage engine like DynamoDB!
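To make the condition concrete, here is a toy evaluator (not the real IAM engine) of what `ForAllValues:StringEquals` on `dynamodb:LeadingKeys` means: every hash key a request touches must equal the caller's CognitoID.

```python
def leading_keys_allowed(request_leading_keys, cognito_id):
    """Toy model of the ForAllValues:StringEquals dynamodb:LeadingKeys condition:
    every hash key the request touches must equal the caller's CognitoID."""
    return all(key == cognito_id for key in request_leading_keys)

me = "us-east-1:11111111-2222-3333-4444-555555555555"
assert leading_keys_allowed([me], me)                      # my own rows: allowed
assert not leading_keys_allowed([me, "someone-else"], me)  # cross-tenant: denied
```

Each authenticated user can only see their own slice of the sensor2 table, even though every client talks to the same table with no application server in between.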

You can read the details of IAM+DynamoDB here.

Anyway, back to performance improvements of the dequeue process. Everything is running pretty well, but the Watch still takes a long time to get its data moved.

Wednesday, January 6, 2016

A note on sensor2 dequeue performance

I've examined sensor2 dequeue performance. Some interesting observations indeed!
  • A single dequeue loop (1250 samples covering 25 seconds) takes a bit over 7 seconds
  • A little under 1 second of this time is getting data from CMSensorRecorder
  • Around 4 seconds is required to prepare this data
  • The time to send the samples to DynamoDB depends on the network configuration:
    • 3 - 6 seconds when the Watch is proxying the network through iPhone using LTE network (with a few bars of signal strength)
    • 2 - 4 seconds when the Watch is proxying the network through iPhone (6s plus) and my home WiFi
    • Around 1.5 seconds when the Watch is directly connecting to network using home WiFi

Speeding up the data preparation will help some.  I will set a goal of 1 second:
  • Hard coded JSON serializer
  • Improvements to the payload signer
  • Reduce the HashMap operations (some clever pivoting of the data)
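The "hard coded JSON serializer" idea in the list above is to skip generic, reflection-based serialization and emit the known record shape with plain string formatting. A sketch in Python (field names are illustrative):

```python
import json

def serialize_sample(t, x, y, z):
    # Emit the fixed, known shape directly; no generic object walking.
    return '{"t":%.3f,"x":%.6f,"y":%.6f,"z":%.6f}' % (t, x, y, z)

row = serialize_sample(1451883716.020, 0.012345, -0.5, 0.98)
# Still valid JSON, so any generic parser on the AWS side can read it back.
parsed = json.loads(row)
```

Since every sample has an identical shape, the serializer does no type dispatch and allocates no intermediate dictionaries, which is where a generic serializer spends its time on 1250-sample batches.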

Sunday, January 3, 2016

Progress: CMSensorRecorder directly to DynamoDB

Relative to before, the pendulum has swung back to the other extreme: a native WatchOS application directly writing to AWS DynamoDB.  Here, we see a screen grab with some events being sent:



This has been an interesting exercise. Specifically:
  • With iOS 9.2 and WatchOS 2.1, development has improved
  • However, I can't yet get the AWS iOS SDK to work on the Watch directly
  • So, I have instead written code that writes directly to DynamoDB
    • Including signing the requests
    • Including implementing low level API for batchWriteItem and updateItem
  • I have also redone the application data model to have a single DynamoDB row represent a single second's worth of data with up to 50 samples per row
    • Initially, samples are indexed using named columns (named by the fraction of a second the sample is in)
    • Later this should be done as a more general documentDB record
    • This approach is a more efficient use of DynamoDB -- provisioning required is around 2 writes/second per Watch that is actively dequeuing (compared to 50 writes/second when a single sample is stored in a row)
  • This application also uses NSURLSession directly
  • This means that the Watch can send events to DynamoDB using configured WiFi when the iPhone is out of range!
  • I have also redone the command loop using GCD dispatch queues (instead of threads)
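The one-row-per-second pivot described above can be sketched as follows; the column-naming scheme (axis letter plus two-digit fraction slot) is my guess at the idea, not the exact schema:

```python
def pivot_second(samples, rate_hz=50):
    """Group (t, x, y, z) samples into one row per whole second, with columns
    named by each sample's fraction-of-a-second slot (e.g. x07 = x at 7/50 s)."""
    rows = {}
    for t, x, y, z in samples:
        sec = int(t)
        slot = int(round((t - sec) * rate_hz)) % rate_hz
        row = rows.setdefault(sec, {"second": sec})
        row["x%02d" % slot] = x
        row["y%02d" % slot] = y
        row["z%02d" % slot] = z
    return list(rows.values())

samples = [(1000.0 + i / 50.0, 0.1, 0.2, 0.3) for i in range(100)]  # 2 seconds
rows = pivot_second(samples)
# 100 single-sample rows collapse into 2 wide rows -> ~2 writes/second per Watch.
```

This is where the "2 writes/second instead of 50" provisioning math comes from: the row count drops by the sample rate, at the cost of wider items.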
Anyway, it appears to be doing the right thing. Data is being recorded in CMSensorRecorder, the dequeue loop is processing data and transmitting up to 1250 samples (25 seconds) of data per network call. The custom request generator and call signing are doing the right thing. Perhaps a step in the right direction? Not quite sure:
  • I see that the actual on-Watch dequeue processing takes about 6 seconds for 25 seconds worth of data. Since all of the data preparation must occur on the Watch (there is no middle man), the additional work of pivoting the data and preparing the DynamoDB request is borne by the Watch.
  • Profiling shows the bulk of this processing time is in JSON serialization!
  • Another approach would be minimal processing on the Watch.  e.g. "dump the raw data to S3" and let an AWS Lambda take care of the detailed processing. This is probably the best approach although not the cheapest for an application with many users.
  • I'm now running tests long enough to see various memory leaks! I've been spending a bit of time with the memory allocator tools lately...
    • I have run into a few with the NSURLSession object
    • The JSON serializer also appears to leak memory
    • Possibly NSDateFormatter also is leaking memory
Here's what a dequeue loop looks like in the logs. You can see the blocks of data written and the loop processing time:

Jan  3 21:05:11 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: dequeueLoop(1)
Jan  3 21:05:11 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: flush itemCount=23, minDate=2016-01-04T05:01:54.557Z, maxDate=2016-01-04T05:01:54.998Z, length=2621
Jan  3 21:05:12 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: data(Optional("{}"))
Jan  3 21:05:13 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: commit latestDate=2016-01-04 05:01:54 +0000, itemCount=23
Jan  3 21:05:13 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: dequeueLoop(2)
Jan  3 21:05:13 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: flush itemCount=49, minDate=2016-01-04T05:01:55.018Z, maxDate=2016-01-04T05:01:55.980Z, length=5343
Jan  3 21:05:14 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: data(Optional("{}"))
Jan  3 21:05:14 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: commit latestDate=2016-01-04 05:01:55 +0000, itemCount=72
Jan  3 21:05:15 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: dequeueLoop(3)
Jan  3 21:05:20 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: flush itemCount=1250, minDate=2016-01-04T05:01:56.000Z, maxDate=2016-01-04T05:02:20.988Z, length=88481
Jan  3 21:05:23 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: data(Optional("{\"UnprocessedItems\":{}}"))
Jan  3 21:05:23 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: commit latestDate=2016-01-04 05:02:20 +0000, itemCount=1322
Jan  3 21:05:23 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: dequeueLoop(4)
Jan  3 21:05:30 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: flush itemCount=1249, minDate=2016-01-04T05:02:21.008Z, maxDate=2016-01-04T05:02:45.995Z, length=88225
Jan  3 21:05:32 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: data(Optional("{\"UnprocessedItems\":{}}"))
Jan  3 21:05:32 Gregs-AppleWatch sensor2 WatchKit Extension[152] <Warning>: commit latestDate=2016-01-04 05:02:45 +0000, itemCount=2571

And here is what a record looks like in DynamoDB. This shows the columnar encoding of a few of the X accelerometer samples:


I have a checkpoint of the code here. Note that this code is somewhat hard coded for writing only to my one table, with AWS authorizations for writes only.

TODO:
  • Update the UI to help explore this data
  • See if there is a more efficient use of the JSON serializer
  • Examine some of the framework memory leaks
  • Try to speed up the dequeue to be better than 6 seconds of wall clock for 25 seconds of data.

Thursday, November 19, 2015

CMSensorRecorder from Watch to AWS

I've done a couple of experiments with the direct networking from the WatchOS 2.0 using NSURLSession. Based on the results, I now have a strategy for getting the iPhone out of the loop.

Recall the current plumbing looks like this:

  • CMSensorRecorder data collected in Watch
  • Dequeued by a native application on the Watch
  • Sent via WCSession to iPhone
  • iPhone uses AWS KinesisRecorder to buffer received events
  • KinesisRecorder sends to AWS (using Cognito credentials)
  • Kinesis is dequeued to an AWS Lambda
  • The Lambda stores the records to DynamoDB (using a fixed IAM role)
This is fine, but has a lot of moving parts. And more importantly, Kinesis is provisioned at expected throughput and you pay by the hour for this capacity.

My next experiment will look like this:
  • CMSensorRecorder data collected in Watch
  • Dequeued by a native application on the Watch
  • Sent directly to AWS IoT (using on-watch local long lived IAM credentials)
  • An IoT rule will send these events to Lambda
  • The Lambda stores the records to DynamoDB (using a fixed IAM role)
The important difference here is that IoT is charged based on actual use, not provisioned capacity. This means the effective costs are more directly related to use and not expected use!

Also, the Watch will be able to transmit directly to AWS instead of going through the iPhone (well, it does proxy through its paired phone if in range; otherwise it goes direct via WiFi if a known network is near). This feature will be an interesting proof of concept indeed.

Anyway, for this first experiment the writes to IoT are via HTTP POST (not via the MQTT protocol). And authentication is via long lived credentials loaded into the Watch, not Cognito. This is primarily because I haven't yet figured out how to get the AWS iOS SDK to work on the Watch (and the iPhone at the same time). So, I am instead using low-level AWS API calls. I expect this will change with a future release of the SDK.

I have toyed with the idea of having the Watch write directly to DynamoDB or Lambda instead of going through IoT. Since the Watch is already buffering the sensor data, I don't really need yet another reliable queue in place. Tradeoffs:
  1. Send via IoT
    • + Get Kinesis-like queue for reliable transfer
    • + Get some runtime on IoT and its other capabilities
    • - Paying for another buffer in the middle
  2. Send direct to Lambda
    • + One less moving part
    • - To ensure data is sent, a synchronous call to Lambda is needed, which can be delayed while writing to DynamoDB; not sure how well this will work on networks in the field
  3. Send direct to DynamoDB
    • + The lowest cost and least moving parts (and lowest latency)
    • - DynamoDB batch writes can only handle 25 items (0.5 seconds) of data
Note: on DynamoDB, earlier I had discussed a slightly denormalized storage scheme, one where each second of data is recorded in one DynamoDB row (with separately named columns per sub-second event). Since DynamoDB can do no-clobber updates, this is a nice tradeoff of rows vs. data width. It would change the data model, and the reader would need to take this into account, but it may make the most sense no matter what. Basically, it gets better utilization of a DynamoDB row by compacting the data as much as possible, and it probably reduces the overall cost of using DynamoDB too, as the provisioning would, in general, be reduced. So, I may just redo the data model and go direct to DynamoDB for this next POC.
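The 25-item BatchWriteItem limit mentioned in tradeoff 3 just means a backlog has to go out as a series of chunks. A sketch of the chunking, and of why the per-second row model helps:

```python
def chunk_batches(items, max_items=25):
    """DynamoDB BatchWriteItem accepts at most 25 items per request,
    so a backlog must be sent as a series of chunks."""
    return [items[i:i + max_items] for i in range(0, len(items), max_items)]

backlog = list(range(1250))   # 25 seconds of 50 Hz samples, one item per sample
batches = chunk_batches(backlog)
# 50 requests at one item per sample -- versus a single 25-item batch if each
# row packed a full second of samples, as discussed in the note above.
```

A real sender would also have to retry the `UnprocessedItems` each response can return; that is omitted here.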

Stay tuned!

Sunday, October 25, 2015

Apple Watch Accelerometer displayed!

There you have it! A journey started in June has finally rendered the results intended. Accelerometer data from the Watch is processed through a pile of AWS services to a dynamic web page.

Here we see the very first rendering of a four second interval where the watch is rotated around its axis. X, Y and Z axes are red, green, blue respectively. Sample rate is 50/second.

The accelerometer data itself is mildly interesting. Rendering it on the Watch or the iPhone were trivial exercises. The framework in place is what makes this fun:
  • Ramping up on WatchOS 2.0 while it was being developed
  • Same with Swift 2.0
  • Getting data out of the Watch
  • The AWS iOS and Javascript SDKs
  • Cognito federated identity for both the iPhone app and the display web page
  • A server-less data pipeline using Kinesis, Lambda and DynamoDB
  • A single-page static content web app with direct access to DynamoDB
No web servers, just a configuration exercise using AWS PaaS resources. This app will likely be near 100% uptime, is primarily charged per use, will scale with little intervention, is logged, AND is a security-first design.

Code for this checkpoint is here.

Thursday, October 15, 2015

Apple Watch Accelerometer -> iPhone -> Kinesis -> Lambda -> DynamoDB

I've been cleaning up the code flow for more and more of the edge cases. Now, batches sent to Kinesis include Cognito Id and additional instrumentation. This will help when it comes time to troubleshoot data duplication, dropouts, etc. in the analytics stream.

For this next pass, the Lambda function records the data in DynamoDB -- including duplicates. The data looks like this:



The Lambda function (here in source) deserializes the event batch and iterates through each record, one DynamoDB put at a time. Effective throughput is around 40 puts/second (on a table provisioned at 75/sec).

Here's an example run from the Lambda logs (comparing batch size 10 and batch size 1):

START RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced Version: $LATEST
2015-10-16T04:10:46.409Z d0e5b23a-54f1-4be8-b100-3a4eaabfbced Records: 10 pass: 2000 fail: 0
END RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced
REPORT RequestId: d0e5b23a-54f1-4be8-b100-3a4eaabfbced Duration: 51795.09 ms Billed Duration: 51800 ms Memory Size: 128 MB Max Memory Used: 67 MB
START RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e Version: $LATEST
2015-10-16T04:13:22.468Z 6f430920-1789-43e1-a3b9-21aa8f79218e Records: 1 pass: 200 fail: 0
END RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e
REPORT RequestId: 6f430920-1789-43e1-a3b9-21aa8f79218e Duration: 5524.53 ms Billed Duration: 5600 ms Memory Size: 128 MB Max Memory Used: 67 MB
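The effective throughput falls straight out of those REPORT lines (sketch):

```python
# Effective put rate from the two Lambda runs in the log above.
batch10_rate = 2000 / 51.79509   # records / seconds of duration -> ~38.6/sec
batch1_rate = 200 / 5.52453      # -> ~36.2/sec

# Both land near the ~40 puts/second figure; the batch-size-1 run is a bit
# under 10% slower per record, matching the "around 10% more work" estimate
# (mostly extra startup/dispatch cycles).
```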

Recall, the current system configuration is:
  • 50 events/second are created by the Watch Sensor Recorder
  • These events are dequeued in the Watch into batches of 200 items
  • These batches are sent to the iPhone on the fly
  • The iPhone queues these batches in the onboard Kinesis recorder
  • This recorder flushes to Amazon every 30 seconds
  • Lambda will pick up these flushes in batches (presently a batch size of 1)
  • These batches will be written to DynamoDB [async.queue concurrency = 8]
The Lambda batch size of 1 is an interesting tradeoff.  This results in the lowest latency processing. The cost appears to be around 10% more work (mostly a lot more startup/dispatch cycles).

Regardless, this pattern needs to write to DB faster than the event creation rate...

Next steps to try:
  • Try dynamo.batchWriteItem -- this may help, but will be more overhead to deal with failed items and provisioning exceptions
  • Consider batching multiple sensor events into a single row. The idea here is to group all 50 events in a particular second into the same row. This will only show improvement if the actual length of an event record is a significant fraction of 1kb record size
  • Shrink the size of an event to the bare minimum
  • Consider using Avro for the storage scheme
  • AWS IoT
Other tasks in the queue:
  • Examine the actual data sent to DynamoDB -- what are the actual latency results?
  • Any data gaps or duplication?
  • How does the real accelerometer data look?
  • (graph the data in a 'serverless' app)

Sunday, September 27, 2015

Cognito based credentials finally refreshing

It turns out I had it wrong all along. See here's the flow:
  • Cognito is mapping an identity to a STS based role
  • We need to ask Cognito to refresh the credentials directly (not just the provider refresh)
Now, there is some debate as to whether this part of the SDK is obeying the refresh contract. So, for now I have this construct in the 'flush to kinesis' flow:
    if (self.credentialsProvider.expiration == nil ||
            self.credentialsProvider.expiration.timeIntervalSinceNow < AppDelegate.CREDENTIAL_REFRESH_WINDOW_SEC) {
        let delegate = AuthorizeUserDelegate(parentController: self.viewController)
        delegate.launchGetAccessToken()
        NSLog("refreshed Cognito credentials")
    }
This winds up triggering the usual Cognito flow. And if a persistent identity is in the app, then this finally does the right thing. Simulator based transmit now does token refresh reliably over many hours, and many versions of STS tokens.

Also, this version of the code is updated based on the release versions of Xcode 7, iOS 9 and watchOS 2. Everything is running fairly smoothly. There are still a couple of areas I'm investigating:

  • The WCSession:sendMessage seems to get wedged in a certain sequence. Watch sends a message, is waiting for a reply, phone gets message, then watch goes to sleep. The phone has processed the message and is blocked on the reply to the watch. This doesn't seem to get unwedged any way other than waiting for a 2 or 5 minute timeout.
  • This particular code does get into an initial blocked state if the phone is locked. This looks to be something where the accelerometer sensor needs to check with the phone to see if the user has granted access to the sensor.
Both of the above are a bit more than minor inconveniences. The first means that even if you live with the watch app going to sleep often, you still can't reliably transfer a bunch of data to the phone using the sendMessage method. The second means it isn't clean to start the app on the watch when the phone is locked or out of range. Maybe there is a reason. But really, we are at a point where getting the sensor data out of the watch for anything close to near-realtime processing isn't yet realized.


Sunday, September 13, 2015

Sensor: running well on iOS 9 Seed and WatchOS 2

I've made a checkpoint of the sensor code that corresponds to the iOS9 GM seed and WatchOS 2.0. The release tag is here. Note, this code is configured to generate synthetic data, even on the hardware. I'm using this to prove the robustness of the Watch -> iPhone -> AWS connections across noisy connections.

I've cleaned up the transport a bit to send JSON directly from the Watch. This goes across the WCSession to the iPhone. The iPhone does parse the data to examine it and update its engineering display. But really, this raw JSON payload is sent directly to Kinesis.

Here's a screen dump of an AWS Lambda parsing the Kinesis flow. This Lambda simply prints the JSON, enough to show what is being sent:



This code runs pretty well in background mode on the iPhone. The data flow continues even while the phone is locked, or working on another application. This key concept shows iPhone as a buffered proxy to AWS.

Next up, handling a few error cases a bit better:
  • When the watch goes in and out of range
  • When the phone goes in and out of being able to reach AWS
  • And of course, when the watch goes to sleep (real goal is to keep being able to dequeue from CMSensorRecorder while watch is asleep)

Sunday, August 30, 2015

CMSensorRecorder data reliably flowing to Kinesis

I've refactored the iPhone side of the code a bit to better represent background processing of the sensor data. The good thing is that WCSession:sendMessage handles iPhone background processing properly. This is at the expense of having to handle reachability errors in code. The checkpoint of code is here.

Now the flow is roughly:
  • On Watch
    • CMSensorRecorder is activated and is recording records locally regardless of the application state of the Watch
    • When the sensor application is in the foreground, a thread attempts to dequeue data from the recorder
    • And when this data is received a WCSession:sendMessage is used to send the data to the iPhone in the background
    • Iff a valid reply comes back from this message, the CMSensorRecorder fetch position is updated to fetch the next unprocessed sensor data
  • On iPhone
    • A background thread is always ready to receive messages from the Watch
    • Those messages are saved to a local Kinesis queue
    • A timer based flush will submit this Kinesis queue to AWS
    • AWS credentials from Cognito are now manually refreshed by checking the credentials expire time
    • The send and submit kinesis calls are now asynchronous tasks
So this is pretty close to continuous feed on the iPhone side.
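The at-least-once handoff in the Watch flow above (advance the fetch position only on a valid reply) can be sketched as:

```python
def drain(recorder, send):
    """Dequeue batches; only advance the cursor when the peer acknowledges.

    `recorder` maps cursor -> (batch, next_cursor); `send` returns True on a
    valid reply. Unacknowledged batches get re-sent on the next pass, which
    gives at-least-once delivery (duplicates possible, no silent loss)."""
    cursor = 0
    while cursor in recorder:
        batch, next_cursor = recorder[cursor]
        if not send(batch):
            break          # no ack: keep cursor, retry this batch later
        cursor = next_cursor
    return cursor

recorder = {0: (["b0"], 1), 1: (["b1"], 2)}
flaky = iter([True, False])                          # second send gets no reply
assert drain(recorder, lambda b: next(flaky)) == 1   # cursor stuck at batch 1
assert drain(recorder, lambda b: True) == 2          # clean run drains fully
```

This is exactly why duplicate records are possible downstream: a batch whose reply was lost gets sent again, which the later consistency checks have to account for.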

Some areas of durability to re-explore next:
  • How to build a Watch dequeue that can run when the application isn't in foreground?
  • Is there another way for WCSession to send to a background task other than sendMessage?
  • How reliable is the sendMessage call?
    • When the iPhone is out of range
    • When the iPhone is locked
    • When it is busy running another application
    • I do see some transient 'not paired' exceptions when sending volume
  • While this does allow for automatic background processing, is there a simpler way of transferring data that doesn't require the application handling reachability errors?
  • How reliable is the Kinesis send-retry when the iPhone can't reach AWS?
I will next be building more quantitative checks of the actual data sent through the system to understand where data gets sent more than once, or where it is lost.

Wednesday, August 26, 2015

Xcode 7 beta6: Bitcode issues between WatchOS and iPhone solved!

Getting there!  A quick upgrade to Xcode 7 beta6 fixed this issue.  We now have data transfer from Watch Accelerometer to CMSensorRecorder to Watch app to iPhone to Kinesis -- yes data is flowing, mostly resiliently too; with intermittent focus, connectivity, etc.  Here is the code.

And some screen dumps (explained below):


The Watch screen dump is pretty much as before. You will see the Cognito integration (using Amazon as an identity provider). The first 4 lines are the identity details. The next 4 lines show the Kinesis storage information: the STS token expire time, the amount of local storage consumed, how many flushes to Kinesis have occurred, and when.

Of course the current code still relies on these transfer operations being in focus, an ongoing area of research as to how to make this a background operation on both the Watch and on the iPhone.  But still, real data is finally in Kinesis.

TODO: build an auto-refreshing STS token, as this appears to be a known problem.

Next up, write an AWS Lambda function to read from Kinesis, parse the records and then put them into DynamoDB.  Once that is there, a visualization example both on iOS and on a Server-less web service...


Sunday, August 23, 2015

Marketing: make something look like what is intended, not what it is

Well, this has been a depressing past couple of days. This was the time to re-integrate the AWS SDK back into the application in preparation for sending data to Kinesis. I had basic Cognito and Kinesis hello world working back in June on WatchOS 1.0. I'd mistakenly assumed that some sort of compatibility over time would be in order. Not to be the case. Summary:
Yes, it is possible to disable the enforcement of the TLS 1.2 requirement. And this I did; I am now able to get a set of temporary keys for calls to AWS services. How many applications are going to have to do this? All of them?

Worse, it doesn't look possible to use the current AWS SDK with a Watch application. This looks like a pretty ugly show stopper:
  • 3rd party library doesn't have bitcode support. While you can disable this on the iPhone,
  • Watch and iPhone have to have the same bitcode support level. And the Watch requires bitcode enabled.
Think about what this means! There are 7-8 years' worth of third-party iPhone libraries out there, probably used by a few applications. And these libraries will NOT work with any application that wants to bundle with WatchOS 2.0. The proverbial 'what were they thinking?' comes to mind.

So, I'm stuck: can't integrate with 3rd party library until rebuilt...

The calendar looks rough for Apple:
  • September 9th announcements; WatchOS2 and some new hardware
  • Then they turn on the holiday season marketing; "buy our watch, it has apps"
  • In the meantime, a mountain of developers are trying to figure out how to ship anything on the Watch
  • New message "trust us, our developers will eventually catch up"


Tuesday, August 18, 2015

WatchOS 2.0 beta5: development productivity increases!

I've been trying to characterize a beta5 issue where the CMSensorRecorder system doesn't reliably dequeue. There is some combination of being on the charger and then off and then starting a new debug session where the thing just works. For now, juggling with the Watch on and off the charger along with the occasional restart of debugger and I've managed to get 10 or more debug sessions. Ridiculous productivity tonight! The result is some progress on understanding how the system works.

As of now, the sample program:
  • Can kick off the CMSensorRecorder for a defined number of minutes
  • When the Watch application is in foreground, it will dequeue whatever records are found and then send them to the iPhone
  • The iPhone will dequeue the records and test them for consistency by looking for gaps in the sensor record date stamps.
Here is what the displays look like now:


Above, you will see the new gapError count. This is a data-consistency check: a quick check to ensure that all data is sequential and updating at 50Hz (the count of 1 is expected; it is the first element since program launch).
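The gap check can be sketched roughly like this (a minimal illustration; the function name and tolerance value are made up, not from the actual project):

```swift
import Foundation

// Samples at 50Hz should arrive 0.02s apart; anything outside a small
// tolerance counts as a gap. The first sample always counts as one,
// which explains the expected count of 1 after launch.
let expectedInterval: NSTimeInterval = 1.0 / 50.0
let tolerance: NSTimeInterval = 0.005

func countGapErrors(timestamps: [NSTimeInterval]) -> Int {
    var gapError = 0
    var last: NSTimeInterval? = nil
    for stamp in timestamps {
        if let previous = last {
            if abs((stamp - previous) - expectedInterval) > tolerance {
                gapError += 1
            }
        } else {
            gapError += 1   // first element since program launch
        }
        last = stamp
    }
    return gapError
}
```

Feeding it an unbroken run like 0.00, 0.02, 0.04 yields a count of 1 (just the initial element), while any dropped sample bumps the count.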

Updated code is here.

Next up is to plumb AWS back into the rig to send this data to Kinesis...

Sunday, August 16, 2015

WatchOS beta5: CMSensorRecorder working again (almost)

The Journey of Instability is alive and well.

I have been working the last 4 days to get around this error when attempting to dequeue CMSensorRecorder messages (filed as bug 22300167):
Aug 16 18:23:43 Gregs-AppleWatch sensor WatchKit Extension[174] <Error>: CoreLocation: Error occurred while trying to retrieve accelerometer records!
During a couple dozen debug attempts I was able to see data flowing only once.

Again, it is fun to be on the pre-release train with Apple. Yes, they work hard to improve features at the expense of stability. The balance between getting things working and presenting a stable interface is a tough one to manage. I know this looks better inside the company, where folks can know in advance not to use experimental interfaces. From the outside, the every-two-weeks, do-it-a-different-way approach is tiring. This will slow the adoption of novel features, I'm sure.

Here's what the workflow was like to get to this point:
  • beta5 upgrade of Xcode, iOS9, WatchOS2 (~ 5 hours)
  • discover that the metadata in the existing sensor project is somehow incompatible
  • manually copy over the code to a newly generated project, now simulator and hardware are working (~ 8 hours)
  • port the sensor recorder using the new protocol (a mere 20 minutes)
  • after numerous sessions to real hardware (simulator doesn't support accelerometer) realize that I don't know how to work around the above bug (~ 8 hours)
As many of you have lived through: the repeated reboots, hard starts, 'disconnect and reconnect', yet another restart because the iPhone screen is wedged, restarting the debug session because it times out, etc. do not make for the most productive development environment. But I persist.

Code is updated here. Works most of the time now. Working on additional tests around restart, etc.


Wednesday, August 12, 2015

Whoops! beta5 has a new protocol for CMSensorRecorder...

I guess my earlier post noting that the batch-based fetch could use an inclusive/exclusive flag is moot. Apple has removed that method call in beta5. Here are the sensor interfaces in beta5:
  • public func accelerometerDataFromDate(fromDate: NSDate, toDate: NSDate) -> CMSensorDataList?
  • public func recordAccelerometerForDuration(duration: NSTimeInterval)
Lovely. This means that, in addition to dealing with the 'up to 3 minutes' delay in retrieving data, we now need to keep NSDate fences for iterative queries. The documentation is a little ambiguous as to what date values should be used. I'll guess they are actual sample dates (the documentation hints at 'wall time'). We'll see.
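A date-fence loop over the new API might look something like this hypothetical sketch. CMSensorDataList is assumed to have been extended to support for-in iteration (as in earlier posts), handleSample() is a made-up helper, and records at or before the fence are skipped in case fromDate turns out to be inclusive:

```swift
import CoreMotion

let recorder = CMSensorRecorder()
// Reach back far enough to cover the 'up to 3 minutes' visibility delay
var fence = NSDate(timeIntervalSinceNow: -180)

func handleSample(sample: CMRecordedAccelerometerData) {
    // placeholder: pivot/enqueue the sample for transport
}

func dequeueNewSamples() {
    guard let list = recorder.accelerometerDataFromDate(fence, toDate: NSDate()) else {
        return
    }
    for case let sample as CMRecordedAccelerometerData in list {
        // Skip anything at or before the fence, in case fromDate is inclusive
        if sample.startDate.compare(fence) != .OrderedDescending {
            continue
        }
        handleSample(sample)
        fence = sample.startDate   // advance the fence to the last sample seen
    }
}
```

Advancing the fence to the last sample's own date (rather than to "now") avoids losing samples that were recorded but not yet visible when the query ran.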

Anyway, the beta5 upgrade is going OK. I'll port to the new style next (and also see whether fromDate is inclusive or exclusive for returned samples). I'll also check whether CMSensorDataList needs to be extended to include generate().

Monday, August 10, 2015

Cleaner approach to distributing CMSensorRecorder data

I've taken another path for handling accelerometer data. Recall:
  • CMSensorRecorder does continuous sampling, but the data visibility is slightly delayed
  • WCSession.sendMessage requires both Watch and iPhone apps to be active and in focus
  • A Watch application's threads pretty much will not be scheduled when the watch is quiescent or another application is running
  • The goal is to have an application running on the Watch that does not require continuous access to the iPhone -- instead, enqueuing data for later delivery
  • Don't forget NSDate still can't be sent through WCSession... (until beta5?)
All of the above make it difficult to have continuous distribution of accelerometer data off the Watch. There still may be some technique for having a system level background thread using complications, notifications, etc. I haven't found it yet. So, for now, I will simply distribute the data when the application is running on the Watch.
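One way around the NSDate restriction noted above (an illustrative workaround, not the project's actual code) is to ship epoch seconds as a plain Double in the WCSession dictionary and rebuild the NSDate on the receiving side:

```swift
import Foundation

// Encode a sample as plist-safe types only (Double), sidestepping NSDate
func encodeSample(date: NSDate, x: Double, y: Double, z: Double) -> [String: AnyObject] {
    return ["t": date.timeIntervalSince1970, "x": x, "y": y, "z": z]
}

// Rebuild the NSDate (and reject malformed payloads) on the other side
func decodeSample(payload: [String: AnyObject]) -> (date: NSDate, x: Double, y: Double, z: Double)? {
    guard let t = payload["t"] as? Double,
          x = payload["x"] as? Double,
          y = payload["y"] as? Double,
          z = payload["z"] as? Double else {
        return nil
    }
    return (NSDate(timeIntervalSince1970: t), x, y, z)
}
```

Keeping the payload down to numbers and strings also makes it trivially compatible with whatever the transfer mechanism will accept.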

Let's try WCSession.transferUserInfo() as the transfer mechanism. The flow on the Watch is roughly:
  • CMSensorRecorder starts recording as before
  • An independent dequeue-enable switch will start a timer loop on the Watch
  • When the timer fires it will look for and enqueue any new batches of sensor data from the recorder
This works pretty well. Sensor data is recorded, a polling loop fetches data from the recorder and queues it for sending to the iPhone. Then, when the iPhone is reachable, the data gets sent and handled by the phone app.
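The Watch-side loop above can be sketched roughly as follows. The 30-second poll interval and helper names are my assumptions, CMSensorDataList is assumed to support for-in iteration, and the batch-identifier filtering reflects the inclusive-fetch behavior of accelerometerDataSince as quoted in these posts:

```swift
import CoreMotion
import WatchConnectivity

class SensorDequeuer: NSObject {
    let recorder = CMSensorRecorder()
    var lastBatch: UInt64 = 0

    func startDequeueLoop() {
        // Fires periodically while the Watch app is running
        NSTimer.scheduledTimerWithTimeInterval(30.0, target: self,
            selector: "timerFired", userInfo: nil, repeats: true)
    }

    func timerFired() {
        guard let list = recorder.accelerometerDataSince(lastBatch) else { return }
        var samples = [[String: AnyObject]]()
        for case let data as CMRecordedAccelerometerData in list {
            if lastBatch != 0 && data.identifier <= lastBatch {
                continue   // the fetch is inclusive of the given batch number
            }
            samples.append(["t": data.startDate.timeIntervalSince1970,
                            "x": data.acceleration.x,
                            "y": data.acceleration.y,
                            "z": data.acceleration.z])
            lastBatch = data.identifier
        }
        if !samples.isEmpty {
            // transferUserInfo queues the payload until the iPhone can take it
            WCSession.defaultSession().transferUserInfo(["samples": samples])
        }
    }
}
```

The appeal of transferUserInfo here is exactly that queueing: the Watch never has to wait for the phone to be reachable.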

Code is here. Some screenshot examples are here:




The astute reader will notice the difference between batchNum and lastBatch above (835 vs 836). batchNum is the next batch we are looking for, and lastBatch is the last batch handled. I think this is a problem in the API: my code needs to know that batch numbers monotonically increase so it can inclusively search for the next batch after what has been handled. This seems to be too much knowledge to expose to the user. I suggest that the call to CMSensorRecorder.accelerometerDataSince(batchNum) should be exclusive of the batch number provided. Or, add an optional parameter (inclusive, exclusive) to keep the API more or less compatible.

Next up:
  • Getting onto the beta5 software stack (Xcode, iOS9, WatchOS2)
  • Going back to attempt native object transfer from Watch to iPhone
  • Moving the iPhone data receiver to a first class background thread
  • Adding the AWS Cognito and Kinesis support for receiving sensor data

Tuesday, July 28, 2015

WatchOS 2.0 beta4: WCSession working a LOT better!

Some good news on this end. After completing the upgrade of the Watch and iPhone to beta4, WCSession is working a lot better. With the same ttl source from before the upgrade, reachability and connectivity make a lot more sense. For example:
  • reachability seems to stay the same as long as the Watch and iPhone are in range.
  • it is also not a function of either application being live and in focus. This is huge, as my iPhone background thread now continues to dequeue even though the Watch is displaying some other task (or is in idle mode)
  • killing the iPhone app doesn't change Watch's reachability to iPhone. (but the process timer commands are stopped as expected)
(updates)
  • the iPhone appears to suspend process timers when the application is not in focus.
  • BUT, these timers seem to wake up when the phone is awake at the lock screen and the app is in focus under the lock screen!?!
  • (the above feels like some rickety support for the badge at the lower left corner of iPhone indicating when apps on the Watch are doing something interesting)
  • there is some pause or suspend of a session during a voice call too
Regardless, this does change the strategy a bit. It may now be possible to have the Watch initiate dequeue events to the iPhone (perhaps even by using the WCSession.transferUserInfo which is nicely queued for us). I will work on background threads on the Watch now, having it initiate transfers to the iPhone, instead of the other way around.

p.s. the upgrade was nominal this time; configuration, Xcode symbols, pairing, etc. It just worked.


Watching files copy: WatchOS 2.0 beta4 upgrade...

It appears that a key background-process feature is available starting with beta4 (WKExtensionDelegate:didReceive*Notification). This is something I want to pursue, primarily for background dequeue of sensor-recorder data. This means several hours of watching software install (everyone should know, computers are mostly good for copying data around).

So far:

  • Xcode 7.0 beta4 upgrade is looking fine (this is a lot of software to download, unpack for install, and then verify for execution and then unpack and boot up the simulators!)
  • iOS9 beta4 upgrade is looking good. Apps appear to mostly work as or better than before (TODO: test wifi stability for a while, it has been off during beta3 work)
  • WatchOS 2.0 is forthcoming. The iPhone Watch app is loaded with the new profile and shows that an upgrade to beta4 is available. Tomorrow will bring 2-3 more hours of watching files copy (download, get it over to the Watch, get it loaded, and restore from backup)
Tomorrow!

Monday, July 27, 2015

Distributing Watch sensor data using WCSession

I've taken a checkpoint on the code refactoring, mostly to demonstrate message flow between the Watch and the iPhone. Features of this checkpoint:

  • Watch and iPhone display reachability on the fly
  • Watch initiates a sensor record operation
  • Now, when a dequeue operation is enabled:
    • Watch sends the dequeue switch command to iPhone
    • iPhone starts a timer
    • iPhone replies to the Watch that it completed this operation (as part of a diagnostic)
  • When timer fires on iPhone:
    • iPhone sends message to Watch asking for next batch of sensor data
    • Watch fetches up to one 'batch' of data and returns it to iPhone
    • iPhone displays some basic data about the return packet
  • Watch keeps track of how many timer commands it receives
  • iPhone keeps track of how many timer attempts it tries
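The request/reply leg of the flow above can be sketched roughly like this (the "cmd" message key and the nextBatchOfSensorData() helper are made up for illustration):

```swift
import WatchConnectivity

// iPhone side: when the timer fires, ask the Watch for the next batch
func requestNextBatch(session: WCSession) {
    guard session.reachable else { return }
    session.sendMessage(["cmd": "fetchBatch"],
        replyHandler: { reply in
            if let samples = reply["samples"] as? [[String: AnyObject]] {
                print("received \(samples.count) samples")
            }
        },
        errorHandler: { error in
            print("fetch failed: \(error)")
        })
}

// Watch side (WCSessionDelegate): answer with up to one batch of data
func session(session: WCSession, didReceiveMessage message: [String: AnyObject],
             replyHandler: ([String: AnyObject]) -> Void) {
    if message["cmd"] as? String == "fetchBatch" {
        replyHandler(["samples": nextBatchOfSensorData()])
    }
}

func nextBatchOfSensorData() -> [[String: AnyObject]] {
    return []   // placeholder: would pull one batch from CMSensorRecorder
}
```

Note that replyHandler only works while both apps are live, which is exactly the limitation discussed in this post.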
Here's a part of the Watch and iPhone diagnostic displays for reference. You will note that both sides are reachable and data has been passed to the iPhone.




This exercise is primarily using the sendMessage() method which only works when the application is live on both sides. But with a bit of careful execution, data flow is illustrated. Features and problems with this approach:

  • Reachability is, in effect (regardless of what the reachable flag reports), a function of:
    • whether the two devices can communicate
    • whether the iPhone application is in focus (for Watch)
    • whether the Watch application is in focus (for iPhone)
  • The Watch cannot launch the iPhone application with this approach. Similarly, fetch requests from the phone will not be delivered if the Watch isn't actively running the application
  • Also, there is a sequence where system state is incorrect:
    • start Watch app (do not start iPhone app)
    • note that the state is reachable (this is the problem)
    • enable the dequeue (notice that delivery is stuck in sending...)
  • In general, the reachability support works OK: the Watch can go in and out of range, either device can go into airplane mode, be rebooted, or have the application killed and restarted, and all continues to work as designed.
Next up will be to work with some of the queued message types and, more importantly, to figure out how to launch an iPhone background method.