Some Observations about App Camp for Girls operations

I was sad to hear App Camp for Girls’ announcement that they will not be offering any camps this year. There’s been a deserved outpouring of support and good wishes for the future of the organization. Nobody can doubt the importance of App Camp’s mission, or the dedication of its officers, board, staff, and volunteers. But I have some observations as well.

I was an early donor to App Camp, and I have been a volunteer too. I stopped donating when I began to doubt the organization’s fiscal efficiency. I have friends who have been staff members, volunteers, and board members. I believe in App Camp’s aims, but there are areas of its execution that deserve sunlight and scrutiny.

Money

I’ve spent quite a bit of time over the last year and a half trying to understand how much money App Camp is raising and spending, and where the money is going. This grew out of some musings one year during the end-of-week pitch session, as I was mentally adding up the tuition and team sponsorships, on top of crowdsourced donor campaigns and corporate sponsorships. I’ve looked at their public tax returns, and also the tax returns of their original fiscal sponsor.  It’s not a simple process. Tax forms change. And the use of a fiscal sponsor means that there’s no single source of financial information. Here are the summaries for the last three years available. I expect we will be able to see 2018’s numbers in early autumn 2019.

 

                                  2017         2016         *2015
Expense per camper per week       $5,094       $2,835       $2,899
Girls served                      51           68           48
Revenue                           $266,354     $158,430     $74,582
Expenses                          $259,791     $192,810     $56,952
On hand at end of year            $143,222     $136,659     $168,792

2015 is a strange year, because that’s the year App Camp’s fiscal sponsor, Technology Association of Oregon, transferred the funds it held as trustee to App Camp’s account (about $142,000). Using a fiscal sponsor is an extra step App Camp took to maximize the impact of donations. A budding 501(c)(3) organization is not tax exempt at the start, but it can work through an existing 501(c)(3) during the multiyear qualifying period between its incorporation as a nonprofit and its recognition as tax exempt. During those startup years, donations flow through the fiscal sponsor and are tax deductible (and employer matchable) immediately. After the IRS grants tax exempt status, the fiscal sponsor transfers the new organization’s funds from the trustee account to the new organization. It sounds strange if you haven’t been around the nonprofit world, but it reflects current best practice and saves money for everyone. It also makes it nearly impossible to estimate the organization’s total expenses for the years of fiscal sponsorship using only tax returns.

I would like to show data from 2013 and 2014, but that data is even harder to work with, because we don’t have a report of what expenses and income went through the fiscal sponsor versus being handled directly on App Camp’s books. A more thorough look at the prior years would be able to quantify how much the organization depended on an unpaid CEO in its early years.

The high expense per camper per week shocked me. I had seen the $2800/week for 2015/2016 before, but only recently received the 2017 return. The $5000 rate upset me so much I had to put the project away for a week. That’s for approximately 34 contact hours. For comparison, iD Tech Camp offers day programs with about 38 contact hours, charging approximately $900 to $1500. iD Tech Camp uses paid instructors, pays for their classroom space, and supplies their own laptops and other equipment. In many of iD Tech’s programs, the student takes home a robot, 3D printer, or electronics project. iD Tech Camp is a for-profit corporation. They turn a profit while charging $900 to $1500.

Some other cost comparisons:

  • One academic quarter of in-state resident tuition and fees at the University of Washington (state supported), up to 18 credits, was $3735 for Spring 2019.
  • A pair of 3-credit summer quarter courses at either of two local private universities (Seattle University and Seattle Pacific University) in 2019 will cost $4518 and $4716 respectively.
  • A week of top-notch professional training at Big Nerd Ranch, including very nice lodging and all meals, is $4200. Their daily commuter rate (lunch included) is $2750. A Big Nerd Ranch class has, in my experience, included 40 to 45 contact hours.

There’s one particularly troubling aspect to the revenue/expense history. In 2017, App Camp ran an Indiegogo campaign with the announced intention of raising money to expand in 2020. That campaign grossed a bit over $75,000. I think $65,000 is a reasonable estimate of the net amount raised after expenses and commission, although that hasn’t been disclosed. App Camp’s net gain in cash on hand at the end of 2017 was only about $6,500. Does that figure already include the $65,000 from Indiegogo? If so, that implies roughly 90% of the 2020 expansion money was immediately eaten up by current (2017) expenses.

Where is the money going? Here’s a breakdown, using App Camp’s Schedule O or Part IX from their IRS Form 990 or 990-EZ. There’s a lot here that I don’t understand. Many of the expense line items strike me as quite high. A missing piece of analysis is a breakdown of spending into “organizational expenses” and “camp expenses”; that would tell the story of what it would take to scale this model up. I don’t think there’s enough information in the tax documents alone to do this analysis. Further explanation would have to come directly from App Camp.

 

                                                           2017       2016       2015
Salaries, other compensation, and employee benefits        99,037     37,458          0
Professional fees and other payments to
  independent contractors                                              36,718      2,096
Occupancy, rent, utilities, and maintenance                 8,320     11,477      8,084
Printing, publications, postage, and shipping               4,365      5,400      4,004
Other expenses (Part IX or Schedule O), detailed below:    71,661    101,757     44,508
    Camp Supplies                                          10,996     13,915
    Conferences, conventions, and meetings                  1,956      9,681      1,528
    Curriculum development                                            15,130
    Equipment                                              17,427     29,860      1,609
    Fees and taxes                                         21,133     12,333
    Food and catering                                                             8,954
    Information Technology                                                        2,612
    Insurance                                               1,049      1,376        888
    Miscellaneous expenses                                  1,580                 1,121
    Office expenses                                                                 345
    Software                                                                        116
    Subscriptions                                                                   334
    Supplies                                                                      7,741
    Travel                                                 18,027     19,462     12,254
    Volunteer background checks                                                     551
    Volunteer gifts                                                               2,940
    Volunteer stipends                                                            2,713
    Volunteer training courses                                                      802

The original tax returns (including 2013 and 2014), and some other supporting documents, are available for download here. Warning: it’s a slog to get through them, and some of the documents have errors from the third-party supplier.

Outreach to potential campers

Most of the iD Tech all-girls programs and girlswhocode.com programs are full or nearly full in the Seattle area as of May 2019. Why is App Camp seeing such a low turnout?

My personal opinion, as an outsider, is that communication and publicity play a role. App Camp has great visibility and presence in the iOS and tech community. But that’s a bit of an echo chamber. There aren’t a lot of 7th and 8th grade girls following that echo chamber (which is why we need App Camp in the first place!). When I checked Seattle listings in early spring, I didn’t see App Camp on any of the summer youth program lists and directories I ran across. App Camp’s website was very general regarding the actual curriculum. The low upfront cost ($500 tuition) was not played up much (Girls Who Code charges $2000 for a two-week day camp, and I cited iD Tech prices earlier). In the absence of details on curriculum, or presence in mainstream summer camp lists, I would expect parents and students to look elsewhere. Execution on the unglamorous but difficult task of visibility is important.

Andrew Benson expressed a similar sentiment on Twitter a couple of weeks ago.

We must consider whether App Camp’s curriculum solves the problem it intends to solve. I believe their project model of a quiz app (“what sort of xyzzy are you?”) is far more limited than it needs to be. It was cutting edge when introduced, but there are now many other youth tech camps with broader offerings. I’ve met a rising 6th grader who built a Raspberry Pi-based music synthesizer in a week at a competing camp. A rising 7th or 8th grader ought to be able to go further than that. Competing camps’ offerings provide ideas. A new look at the program, particularly the end-of-week focus on investors and commercial success, is warranted. The creative and collaborative aspects of software development are highlighted throughout the week, but don’t come out very clearly in the final “pitch day”. These are the seeds that will serve future biologists, geologists, oceanographers, and engineers who will use computers (be they iPhones or massive cloud clusters) as tools to advance their work.

There’s clearly a market (as demonstrated by iD Tech and Girls Who Code). And there’s clearly a need (look at the diversity of your own company or project team). But that doesn’t mean that App Camp’s current curriculum is the right answer. 

Future

App Camp is in a unique position right now to contribute back to the community’s knowledge.

App Camp is a fundraising behemoth. That’s lightning in a bottle. I hope they can figure out what’s behind their fundraising success and goodwill, and keep that alive in whatever their successor is.

I’d like to see App Camp publish their curriculum, under an open source license, so that other organizations can build on their success. This could include:

  • Job descriptions and skill requirements for each of the camp volunteer roles.
  • Lesson plans, learning objectives, and rubrics.
  • Sample teaching code.
  • Project starter code.
  • Volunteer training materials.
  • Lessons learned during their transition from Objective-C to Swift.

As a bonus, include a detailed budget and timeline that other organizations could use as a starting point. What equipment is needed? How much does it cost to run a week of camp? What are the critical timeline checkpoints for facilities, advertising, recruitment, instructor vetting and training, and camper registration?

App Camp has quite rightly benefitted from strong tech community support. I hope that they publish their results in a form that is another brick in the wall, for the next wave to build upon.


Snap Judgment #934 (Secrets of War) Is a Wonderful Work of Fiction

Walter Mitty is alive and well, and living in Snap Judgment episode 934. It’s a very entertaining episode, in the sense that any good work of historical fiction or science fiction is entertaining. Snap Judgment claims their stories are “true to the teller”.  But there are too many incredible claims in this episode for me to mark the story as anything other than fiction. Minimal fact checking in Wikipedia would have revealed these discrepancies. I’m calling BS.

The episode purports to tell the story of Jack Boyles, allegedly a 22 year old sailor on a US aircraft carrier during the Cuban Missile Crisis. He claims he was sent on a one-way mission to the island of Cuba to illuminate a missile silo for destruction by Navy bombers.

My BS detector started wiggling when the storyteller was identified as the only yeoman aboard a US Navy aircraft carrier. I served on active duty as a yeoman myself. A yeoman is a Navy clerk. The storyteller claims to have been a Yeoman Second Class. A second class petty officer is pay grade E-5, the same pay grade as a sergeant in the Army or Marine Corps. On my ship, with a crew of about 200, we had an authorized complement of three yeomen. It’s ludicrous to think that a 1962 aircraft carrier would have had only one yeoman. It’s ludicrous to think that the senior yeoman on an aircraft carrier would have been only a YN2.

But let’s keep listening. After all, never let the truth get in the way of a good story.

YN2 Boyles was called to the captain’s cabin. Or office. Or maybe the wardroom. That point isn’t made clear and isn’t really important. The carrier’s captain told him he was being invited to volunteer for a one-way, top secret mission. That doesn’t add up for me. The US military doesn’t send people on suicide missions. Well, ok, maybe just this once, since it was a crisis? And they just happened to have cyanide pills on the carrier already.

The mission was to shine an illuminating rifle on one of the three “silos” that were holding missiles. This was to allow US Navy bombers to take out the “silos”. But the missiles deployed to Cuba were not based in silos. They were Soviet R-12 and R-14 missiles, surface launched. No silos. Ok, well, maybe the language was sloppy, and he said “silo” when he meant “launcher”. 

Mr. Boyles describes being ferried by helicopter to a landing zone near his assigned “silo”, and his walk through the dark woods to the “silo”. Hey wait a minute. This guy’s a clerk. There’s a US Marine Corps security detachment on every aircraft carrier. Why send a clerk when you could send a trained Marine? Well, maybe he had a lot of hunting experience and was used to moving at night through the woods. He says the mission was planned for 48 hours and was extended to 72 hours. Why do you send one person, alone, on a critical two day mission? Why not a team of two?

Oh, the kicker? Mr. Boyles claims to have conducted this operation from the USS Shangri-La. But USS Shangri-La was being overhauled in a shipyard in New York during the Cuban Missile Crisis. She arrived in Mayport, Florida in August, and left for overhaul in New York a month later. She did not participate in the Cuban Missile Crisis. Her air group deployed aboard USS Lexington.

The story ends with the description of a wonderful present from Mr Boyles’s son on his 78th birthday in spring of 2018: a carved model of the aircraft carrier he served on. I have no doubt that this gift triggered some memories and emotions. Many of those memories were probably true. But I don’t think this one is.

Cool story. But I have lost trust in Snap Judgment’s judgment. I hope they will take down the story, or clearly identify it as fiction.


Privacy Consent in Mojave (part 2: AppleScript)

This two-part series discusses lessons learned in handling user consent for access to private information by a third-party program in macOS Mojave. In Part 1 of this discussion, we saw how to query the user for consent to privacy-restricted areas, how to do it synchronously, and how to recover when your program has been denied consent.

Consent for automation (using AppleScript) is more complicated. You won’t know whether you can automate another application until you ask, and you won’t find out for sure unless the other application is running. The API for automation consent is not as well-crafted as the API for other privacy consent.

The source code for this article is the same project I used in Part 1. It is available at https://github.com/Panopto/test-mac-privacy-consent under an Apache license. The product that drove this demonstration needs automation control only for Keynote and PowerPoint, but the techniques apply to any other scriptable application. Note that this sample application is not sandboxed. You’ll need to add your own entitlements for AppleScript control if you need to be sandboxed; see https://developer.apple.com/library/archive/documentation/Miscellaneous/Reference/EntitlementKeyReference/Chapters/EnablingAppSandbox.html#//apple_ref/doc/uid/TP40011195-CH4-SW25.

 

You will want to think more carefully about whether to ask your user for automation permission, and when to ask. You don’t want to bombard your customer with a large number of requests for control of applications that won’t be relevant to the task at hand. For the Panopto video recorder, we don’t ask for permission to control Keynote or PowerPoint until we see that someone is recording a presentation and is running Keynote or PowerPoint. If you’re running just Keynote, we won’t ask for PowerPoint access. One other wrinkle for automation consent that’s different from media consent: you only have one string in your Info.plist to explain what you’re doing. You can have separate (localizable) strings to explain each of camera, microphone, calendar, and so on. But Automation gets only one explanation, presented for each application you want to automate. You’ll have to be creative, perhaps adding a link to your own website with further explanation.
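
For reference, the single explanation lives in Info.plist under the NSAppleEventsUsageDescription key. Here is a minimal sketch; the wording of the value is only an example, not Panopto’s actual string:

<key>NSAppleEventsUsageDescription</key>
<string>This app uses automation to read the title of your current Keynote or PowerPoint presentation. See our website for details.</string>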

 

[Screenshot: Screen Shot 2018-09-03 at 5.21.53 PM]

The newer beta versions of macOS Mojave provide an API to query the automation consent status for a particular application: the C API AEDeterminePermissionToAutomateTarget(), defined in AppleEvents.h. You’ll call it with an AppleEvent descriptor, created either with Core Foundation or with NSAppleEventDescriptor. The descriptor targets one specific external application using the external application’s bundle identifier; you’ll need a different descriptor for each external application you want to control. Here’s how to set it up, using the C style API just for fun (you were expecting Swift???):

 

- (PrivacyConsentState)automationConsentForBundleIdentifier:(NSString *)bundleIdentifier promptIfNeeded:(BOOL)promptIfNeeded
{
    PrivacyConsentState result;
    if (@available(macOS 10.14, *)) {
        AEAddressDesc addressDesc;
        // We need a C string here, not an NSString
        const char *bundleIdentifierCString = [bundleIdentifier cStringUsingEncoding:NSUTF8StringEncoding];
        OSErr createDescResult = AECreateDesc(typeApplicationBundleID, bundleIdentifierCString, strlen(bundleIdentifierCString), &addressDesc);
        OSStatus appleScriptPermission = AEDeterminePermissionToAutomateTarget(&addressDesc, typeWildCard, typeWildCard, promptIfNeeded);
        AEDisposeDesc(&addressDesc);
        switch (appleScriptPermission) {
            case errAEEventWouldRequireUserConsent:
                NSLog(@"Automation consent not yet granted for %@, would require user consent.", bundleIdentifier);
                result = PrivacyConsentStateUnknown;
                break;
            case noErr:
                NSLog(@"Automation permitted for %@.", bundleIdentifier);
                result = PrivacyConsentStateGranted;
                break;
            case errAEEventNotPermitted:
                NSLog(@"Automation of %@ not permitted.", bundleIdentifier);
                result = PrivacyConsentStateDenied;
                break;
            case procNotFound:
                NSLog(@"%@ not running, automation consent unknown.", bundleIdentifier);
                result = PrivacyConsentStateUnknown;
                break;
            default:
                NSLog(@"%s switch statement fell through: %@ %d", __PRETTY_FUNCTION__, bundleIdentifier, appleScriptPermission);
                result = PrivacyConsentStateUnknown;
        }
        return result;
    }
    else {
        return PrivacyConsentStateGranted;
    }
}

There’s an unfortunate choice made in AppleEvents.h: the definition of the result code errAEEventWouldRequireUserConsent is wrapped in a conditional that defines it only for macOS 10.14 and higher. I want my code to work on earlier releases too, so I’ve added my own conditional definition. If you do the same thing, you’ll probably have to fix your code when Apple fixes their header:

// !!!: Workaround for Apple bug. Their AppleEvents.h header conditionally defines errAEEventWouldRequireUserConsent and one other constant, valid only for 10.14 and higher, which means our code inside the @available() check would fail to compile. Remove this definition when they fix it.
#if __MAC_OS_X_VERSION_MIN_REQUIRED <= __MAC_10_14
enum {
    errAEEventWouldRequireUserConsent = -1744, /* Determining whether this can be sent would require prompting the user, and the AppleEvent was sent with kAEDoNotPromptForPermission */
};
#endif

Finally, let’s wrap this up in a shorter convenience call:

 

NSString *keynoteBundleIdentifier = @"com.apple.iWork.Keynote";

- (PrivacyConsentState)automationConsentForKeynotePromptIfNeeded:(BOOL)promptIfNeeded
{
    return [self automationConsentForBundleIdentifier:keynoteBundleIdentifier promptIfNeeded:promptIfNeeded];
}

 

Caution: this code will not always give you a useful answer. If the automated program is not running, you won’t know the state of consent, even if you’ve been granted consent previously. You’ll want to test whether the automated program is running, or react to changes in NSWorkspace’s list of running applications, or perhaps even launch the automated application yourself. It’s worth taking some time to experiment with the buttons on the sample application when your scripted app is running, not running, never queried for consent, or previously granted/denied consent. In particular, methods like showKeynoteVersion will not work correctly when the scripted application is not running.
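
One way to reduce the ambiguity is to check whether the target application is running before you query. Here’s a minimal sketch using NSRunningApplication; the helper method name is mine, not part of the sample project:

- (BOOL)applicationIsRunningWithBundleIdentifier:(NSString *)bundleIdentifier
{
    // AppKit's NSRunningApplication reports every running application matching the bundle identifier.
    NSArray<NSRunningApplication *> *matches = [NSRunningApplication runningApplicationsWithBundleIdentifier:bundleIdentifier];
    return matches.count > 0;
}

// Usage: only query (or prompt for) automation consent when Keynote is actually running.
if ([self applicationIsRunningWithBundleIdentifier:keynoteBundleIdentifier]) {
    PrivacyConsentState keynoteConsent = [self automationConsentForKeynotePromptIfNeeded:YES];
    // ...
}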

 

[Screenshot: Screen Shot 2018-09-04 at 8.17.25 PM]

We can nag for automation consent, just as we do for camera and microphone consent. But the Security & Privacy Automation pane behaves differently. It does not prompt the user to restart your application. So let’s add a warning in the nag screen, in hopes of warding off at least a few support requests.

 

[Screenshot: Screen Shot 2018-09-04 at 10.57.33 AM]

Automation consent is more complicated than media and device consent. Felix Schwarz, Paulo Andrade, Daniel Jalkut, and several others have written about the incomplete feel of the API. This pair of posts is meant to show you how to ship software today, with the API we have now.

Privacy Consent in Mojave (part 1: media and documents)

macOS Mojave brings new user control over applications’ access to user data, camera, microphone, and AppleScript automation. This two-part series describes our experience adopting the new privacy requirements in the Panopto Mac Recorder. We needed to smooth out the process for camera, microphone, and AppleScript, but our approach will work for any of the dozen or so privacy-restricted information categories.

 

Because the Panopto Mac Recorder is a video and audio capture application, we need to comply with Camera and Microphone privacy consent. Any call to AVFoundation that would grant access to camera or microphone data triggers an alert from the system, and an opportunity for the user to grant or deny access. 

 

However, the view controller that needs camera and microphone access has multiple previews, and a live audio level meter. The calls from AVFoundation to request access are asynchronous. That means that bringing up that one view controller triggers six different alerts in rapid succession, each asking for camera or microphone access. That’s not a user experience we want to present.

 

I talked with Tim Ekl about the problem. He said that Omni Group was using a single gatekeeper object to manage all of their privacy consent requests. That’s the approach we decided to take. A singleton PrivacyConsentController is now responsible for handling all of the privacy consent requests, and for recovering from rejection of consent.
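
The gatekeeper itself is nothing exotic. Here’s a sketch of the shape, using a standard dispatch_once singleton (the shipping class has more to it, of course):

@interface PrivacyConsentController : NSObject
+ (instancetype)sharedController;
@end

@implementation PrivacyConsentController

+ (instancetype)sharedController
{
    static PrivacyConsentController *sharedController = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Every privacy consent request in the app funnels through this one instance.
        sharedController = [[self alloc] init];
    });
    return sharedController;
}

@end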

 

The source code for PrivacyConsentController is available at https://github.com/Panopto/test-mac-privacy-consent under an Apache license.

 

The method -requestAccessForMediaType: on AVCaptureDevice requests access for audio and video devices. It takes a completion handler (as a block), which is fired asynchronously after a one-time UI challenge. If the user has previously granted permission for access, the completion handler fires immediately. If it’s the first time requesting access, the completion handler fires after the user makes their choice.

 

For simplicity’s sake, we require that the user grant access to both the camera and the microphone before we proceed to the recording preview screen. We ask for audio access first, and then, in the completion handler, ask for camera access. Finally, in the completion handler for the camera request, we fire a developer-supplied block on the main thread.

 

We need to support macOS versions back through 10.11. So we’ll wrap the logic in an @available clause, and always invoke the completion handler with a consent status of YES for macOS prior to 10.14. We track the consent status in a property, with a custom PrivacyConsentState enum having values for granted, denied, and unknown. We use the custom enum because the AVAuthorizationStatus enum (returned by -authorizationStatusForMediaType:) is not defined prior to 10.14, and we want to know the status on earlier OS versions.
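
Here’s a sketch of how that enum might be declared (the declaration in the sample project may differ slightly):

// Tri-state consent tracking that also compiles and runs on macOS releases
// where AVAuthorizationStatus is not available.
typedef NS_ENUM(NSInteger, PrivacyConsentState) {
    PrivacyConsentStateUnknown,
    PrivacyConsentStateDenied,
    PrivacyConsentStateGranted,
};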

 

There’s another complication, though. The user alert for each kind of privacy access (camera, microphone, calendar, etc.) is only presented once for each application. If they clicked “grant”, that’s great, and we’re off and running. If they clicked “deny”, though, we’re stuck. We can’t present another request via the operating system, and we can’t bring up our recording preview.

 

Enter the nag screen. The nag screen points the user to the correct Security & Privacy pane. We will show the nag screen (optionally, depending on a parameter to our gatekeeper method) from the completion handler if permission is not granted.

 

Putting it all together, here’s what the IBAction looks like for macOS 10.14, with the guard code in place, restricting access to the AVFoundation-heavy view controller until we get the consent we need. This code works all the way back to macOS 10.11.

 

- (IBAction)newRecording:(id)sender
{
    [[PrivacyConsentController sharedController] requestMediaConsentNagIfDenied:YES completion:^(BOOL granted) {
        if (granted) {
            [self openCreateRecordingView];
        }
    }];
}

- (void)openCreateRecordingView
{
}

 

Here’s the entry point for media consent:

 

[Screenshot: Screen Shot 2018-09-03 at 5.18.43 PM]

- (void)requestMediaConsentNagIfDenied:(BOOL)nagIfDenied completion:(void (^)(BOOL))allMediaAccessGranted
{
    if (@available(macOS 10.14, *)) {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
            if (granted) {
                self.microphoneConsentState = PrivacyConsentStateGranted;
            }
            else {
                self.microphoneConsentState = PrivacyConsentStateDenied;
            }
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
                if (granted) {
                    self.cameraConsentState = PrivacyConsentStateGranted;
                }
                else {
                    self.cameraConsentState = PrivacyConsentStateDenied;
                }
                if (nagIfDenied) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self nagForMicrophoneConsentIfNeeded];
                        [self nagForCameraConsentIfNeeded];
                    });
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    allMediaAccessGranted(self.hasFullMediaConsent);
                });
            }];
        }];
    }
    else {
        allMediaAccessGranted(self.hasFullMediaConsent);
    }
}

 

The call to -requestAccessForMediaType: is documented as taking some time to fire its completion handler. That is in fact the case when you’re asking for consent for the first time. But on the second and subsequent requests, the completion handler is in practice invoked immediately, with granted set to the user’s previous answer.
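
If you only want to read the current answer, without ever triggering the system prompt, you can ask for the status directly. A sketch, with the same pre-10.14 fallback we use elsewhere:

- (PrivacyConsentState)currentMicrophoneConsentState
{
    if (@available(macOS 10.14, *)) {
        // +authorizationStatusForMediaType: never presents UI; it just reports the stored answer.
        switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio]) {
            case AVAuthorizationStatusAuthorized:
                return PrivacyConsentStateGranted;
            case AVAuthorizationStatusDenied:
            case AVAuthorizationStatusRestricted:
                return PrivacyConsentStateDenied;
            case AVAuthorizationStatusNotDetermined:
            default:
                return PrivacyConsentStateUnknown;
        }
    }
    else {
        // Before 10.14 there is no consent machinery, so treat access as granted.
        return PrivacyConsentStateGranted;
    }
}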

 

Here’s a sample nag screen, to recover from a denial of consent:

 

[Screenshot: Screen Shot 2018-09-03 at 5.19.01 PM]

- (void)nagForMicrophoneConsentIfNeeded
{
    if (self.microphoneConsentState == PrivacyConsentStateDenied) {
        NSAlert *alert = [[NSAlert alloc] init];
        alert.alertStyle = NSAlertStyleWarning;
        alert.messageText = @"Panopto needs access to the microphone";
        alert.informativeText = @"Panopto can't make recordings unless you grant permission for access to your microphone.";
        [alert addButtonWithTitle:@"Change Security & Privacy Preferences"];
        [alert addButtonWithTitle:@"Cancel"];

        NSInteger modalResponse = [alert runModal];
        if (modalResponse == NSAlertFirstButtonReturn) {
            [self launchPrivacyAndSecurityPreferencesMicrophoneSubPane];
        }
    }
}

 

How do we respond to the alert? By linking to a URL that is not officially documented, using the x-apple.systempreferences: scheme. I worked out the URLs by starting with the links at https://macosxautomation.com/system-prefs-links.html, and then applied some guesswork. You can see many of the URL targets I found in the source code at https://github.com/Panopto/test-mac-privacy-consent.

 

- (void)launchPrivacyAndSecurityPreferencesMicrophoneSubPane
{
    [[NSWorkspace sharedWorkspace] openURL:[NSURL URLWithString:@"x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone"]];
}
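
The camera sub-pane follows the same pattern; only the query fragment changes. These anchors aren’t officially documented, so treat them as best guesses that could break in a future release:

- (void)launchPrivacyAndSecurityPreferencesCameraSubPane
{
    [[NSWorkspace sharedWorkspace] openURL:[NSURL URLWithString:@"x-apple.systempreferences:com.apple.preference.security?Privacy_Camera"]];
}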

Take note: when you’re working with camera, microphone, calendar, reminders, and other media-based access, your program’s privacy consents will NEVER change from PrivacyConsentStateDenied to PrivacyConsentStateGranted within a single run of your program. The user must quit and restart your program for the control panel’s consent to take effect. For standard media/calendar/reminders consent, your users will see a reminder to quit and restart your app. We will see in the next post that this is NOT the behavior for AppleScript consent.

[Screenshot: Screen Shot 2018-09-04 at 10.56.24 AM]

For testing, use the command line invocations “tccutil reset All”, “tccutil reset Camera”, “tccutil reset Microphone”, or “tccutil reset AppleEvents”.

Next up, in a separate post: how do we deal with AppleScript consent requests? It’s a bit more complicated.

Privacy risks from iOS photo metadata

There’s a ton of very personal information associated with a photo that you take with your smartphone. By default, the phone captures all of the camera settings (aperture, shutter speed, focal length). But it also captures location and timestamp. The timestamp and location from a photo, or series of photos, can be used by a domestic violence perpetrator to infer places a victim frequents, and their patterns of travel. When this information is posted via a photo sharing service or social media account, it can be an unexpected (and even unknown, silent) privacy breach. 
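
To see just how much rides along with one photo, you can dump its embedded properties with ImageIO. A minimal sketch (the file URL is whatever image you want to inspect):

#import <Foundation/Foundation.h>
#import <ImageIO/ImageIO.h>

// Logs the GPS and Exif dictionaries embedded in a photo file.
void LogPhotoMetadata(NSURL *photoURL)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)photoURL, NULL);
    if (source == NULL) {
        return;
    }
    NSDictionary *properties = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    CFRelease(source);

    // Location and timestamp travel with the image unless the sharing app strips them.
    NSLog(@"GPS: %@", properties[(NSString *)kCGImagePropertyGPSDictionary]);
    NSLog(@"Exif: %@", properties[(NSString *)kCGImagePropertyExifDictionary]);
}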

A Twitter conversation the other night prompted this post. A very senior graphics engineer was surprised to see how much of her personal information and travel patterns was exposed to a stalker ex-partner via photo sharing. The location history revealed by someone’s photo stream is at least as rich (and endangering) as the direct location history determined from GPS. That’s a dangerous privacy breach. If it caught a senior engineer by surprise, imagine how many non-technical smartphone customers are at risk!

All of this photo reference information is commonly referred to as metadata, but that’s an imprecise technical buzzword. Properly written messaging and photo sharing apps will educate the customer about what’s being captured, shared, and posted. “It’s not just the photo, but we’re going to tell the world where you were and when you were there. And once we post it, that information will be available forever, and indexed by all of your favorite search engines.” Many apps won’t be quite that honest. And many customers won’t pay attention. If they do pay attention, they might not remember, years later, that they had given permission, when domestic violence becomes a possibility or reality.

Apple can help this by making an iOS app’s photo sharing permissions more granular.

At the moment, there are three levels of permission for access to the camera and the camera roll, defined in Cocoa Keys: NSPhotoLibraryAddUsageDescription (write-only access to the photo library); NSCameraUsageDescription (direct capture of the camera image); and NSPhotoLibraryUsageDescription (full read-write access to the photo library’s images and metadata).

An additional level of granularity, call it NSPhotoLibraryImagesUsageDescription, would help. This proposed new setting would allow an app to read the images in the photo library. It would not allow photo editing, metadata editing, or metadata viewing. If a customer grants NSPhotoLibraryImagesUsageDescription access to an app, that app cannot (deliberately or inadvertently) share the customer’s position history via photos. The privacy fence would be enforced by the operating system. And that’s exactly what we want an operating system to do.

I’ve filed this as rdar://33421676 with Apple. Dupe freely!

I have no idea what the analogous answer for Android is. Drop me a note if you know, and I’ll update this post.

Updating SceneKit WWDC 2013 slides for Xcode 7

With recent changes to the AppKit headers, you need to make a couple of changes to the WWDC 2013 SceneKit Slides code to get it to build. There are some cool examples in that year’s talk/sample code that didn’t make it into 2014’s.

In the ASCPresentationViewController, switch from a method declaration for the -view superclass override to a property in the header, and specify @dynamic for that property in the implementation.

@property (strong) SCNView *view;
//- (SCNView *)view;

@dynamic view;
//- (SCNView *)view {
//    return (SCNView *)[super view];
//}

I also updated the .xcodeproj to current standards, and fixed a couple of int/NSInteger/NSUInteger mismatches.

I’ve submitted it to Apple as rdar://23829155. In the meantime, here are the diffs:

diff --git a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.h b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.h
index 7d66316..bb0e54f 100644
--- a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.h
+++ b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.h
@@ -55,7 +55,9 @@
@property (weak) id <ASCPresentationDelegate> delegate;

// View controller
-- (SCNView *)view;
+// Hal Mueller change: make this a property, @dynamic, to compile under Xcode 7/10.11 SDK
+@property (strong) SCNView *view;
+//- (SCNView *)view;
- (id)initWithContentsOfFile:(NSString *)path;

// Presentation outline
diff --git a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.m b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.m
index 46d9e00..1c914b6 100644
--- a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.m
+++ b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCPresentationViewController.m
@@ -91,9 +91,10 @@ typedef NS_ENUM(NSUInteger, ASCLightName) {

#pragma mark - View controller

-- (SCNView *)view {
- return (SCNView *)[super view];
-}
+@dynamic view;
+//- (SCNView *)view {
+// return (SCNView *)[super view];
+//}

- (id)initWithContentsOfFile:(NSString *)path {
if ((self = [super initWithNibName:nil bundle:nil])) {
@@ -660,12 +661,12 @@ typedef NS_ENUM(NSUInteger, ASCLightName) {

#pragma mark - Misc

-CGFloat _lightSaturationAtSlideIndex(int index) {
+CGFloat _lightSaturationAtSlideIndex(NSInteger index) {
if (index >= 4) return 0.1; // colored
return 0; // black and white
}

-CGFloat _lightHueAtSlideIndex(int index) {
+CGFloat _lightHueAtSlideIndex(NSInteger index) {
if (index == 4) return 0; // red
if (index == 5) return 200/360.0; // blue
return 0; // black and white
diff --git a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCSlideTextManager.m b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCSlideTextManager.m
index ce17c6f..cdc12a4 100644
--- a/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCSlideTextManager.m
+++ b/SceneKit_Slides_WWDC2013/Scene Kit Session WWDC 2013/Sources/ASCSlideTextManager.m
@@ -71,7 +71,7 @@ static CGFloat const TEXT_FLATNESS = 0.4;
return self;
}

-- (NSColor *)colorForTextType:(ASCTextType)type level:(int)level {
+- (NSColor *)colorForTextType:(ASCTextType)type level:(NSUInteger)level {
switch (type) {
case ASCTextTypeSubtitle:
return [NSColor colorWithDeviceRed:160/255.0 green:182/255.0 blue:203/255.0 alpha:1];

Options for Full Text Search in Core Data

Last weekend Chris Olds and I were discussing text search engines, and in particular how to take advantage of them to speed up searches of free-form text in Core Data. Here’s a summary of what we found. I haven’t tested or implemented any of these ideas; this is simply a survey of what’s out there.

I’m not including techniques that deal with fast searches of short text fields: normalizing your query strings and searchable text, using case-insensitive searches, etc. That’s all well documented by Apple and in the usual Core Data reference books.

I did run across one very cool article outlining a profiling method I hadn’t seen before. The Art & Logic Blog goes one step further than the typical use of com.apple.CoreData.SQLDebug. Take advantage of the fact that you have SQLite installed on your Mac! You can paste the SQL query being logged by your iOS app into SQLite on your Mac, and use the EXPLAIN QUERY PLAN command there to understand the search plan.

Full Text Search

Full text search (FTS) is about finding search terms within large bodies of text. This is different from matching someone’s last name to the lastName attribute in a Core Data entity. Imagine instead that your Core Data database contains notes, or newspaper articles, or patent descriptions, or travel resort reviews, and you want to search within the text of those articles. The brute force method is to scan all of the text of each article, searching for matches to the search term. That takes a very long time, and doesn’t always give you the results you want.

Ideally, your FTS within Core Data will respond as quickly as Google or Bing does when you enter a search term. The results will be ranked by relevance. The search will handle word stemming correctly: if I enter a search for “lodge”, I probably want to see results containing “lodges” or “lodging”, too. Core Data does not handle any of these needs.

Roll Your Own

Michael Heyeck wrote an 8-part series of blog articles describing how to build your own FTS capability directly within Core Data, using only Core Data tools and constructs. It’s a very comprehensive series, and it’s a shame it isn’t more widely known. He doesn’t just teach you how to do FTS in Core Data. He also shows you how to read and understand the SQL queries that are generated on your behalf, and how to modify your NSPredicates and data model design to make the queries fast.

The series includes source code for a Notes application with FTS, under BSD license.

Search Kit

When you type something into the Spotlight search bar on your Mac, you’re using FTS. Mac OS X has already built an FTS index of the files on your system, and queries that index. Search Kit is the Foundation framework that Apple uses to deliver those search results, and it’s available to you too. The catch? It’s Mac only, and not integrated into Core Data.

When we were chatting, I mentioned to Chris that Search Kit would make a terrific NSHipster topic. The next day, that’s what happened! The NSHipster article also summarizes the technical issues in Full Text Search nicely.

Indragie Karunaratne has a project on Github that uses Search Kit to back Core Data searches. I’ve only read over the source, and haven’t tried it, but it looks solid. His approach is to build a Search Kit index that returns NSManagedObjectIDs of Core Data objects matching a particular full text search.
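
To make that shape concrete, here’s a rough sketch in the same spirit (my own illustration, not code from his project): index each object’s searchable text under the URI of its objectID, then map search hits back to object IDs.

#import <CoreServices/CoreServices.h>
#import <CoreData/CoreData.h>

// Index one managed object's text, keyed by the URI representation of its objectID.
void IndexManagedObjectText(SKIndexRef index, NSManagedObject *object, NSString *text)
{
    NSURL *objectURI = [[object objectID] URIRepresentation];
    SKDocumentRef document = SKDocumentCreateWithURL((__bridge CFURLRef)objectURI);
    SKIndexAddDocumentWithText(index, document, (__bridge CFStringRef)text, true);
    CFRelease(document);
    SKIndexFlush(index);
}

// Run a full text query and translate the matches back into NSManagedObjectIDs.
NSArray<NSManagedObjectID *> *ObjectIDsMatchingQuery(SKIndexRef index, NSString *query, NSPersistentStoreCoordinator *coordinator)
{
    NSMutableArray<NSManagedObjectID *> *results = [NSMutableArray array];
    SKSearchRef search = SKSearchCreate(index, (__bridge CFStringRef)query, kSKSearchOptionDefault);
    SKDocumentID documentIDs[50];
    CFURLRef documentURLs[50];
    CFIndex foundCount = 0;
    SKSearchFindMatches(search, 50, documentIDs, NULL, 1.0, &foundCount);
    SKIndexCopyDocumentURLsForDocumentIDs(index, foundCount, documentIDs, documentURLs);
    for (CFIndex i = 0; i < foundCount; i++) {
        NSManagedObjectID *objectID = [coordinator managedObjectIDForURIRepresentation:(__bridge NSURL *)documentURLs[i]];
        if (objectID != nil) {
            [results addObject:objectID];
        }
        CFRelease(documentURLs[i]);
    }
    CFRelease(search);
    return results;
}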

Commercial Library

Locayta makes their FTS mobile search engine available to iOS developers: free for non-commercial use, $1000 per commercial app. It’s not integrated with Core Data. An approach similar to the one Indragie Karunaratne took with Search Kit integration would probably work, though.

Hackery

The backing store most commonly used with Core Data, SQLite, includes FTS support. It’s just not exposed in any Core Data API (at least, not as of iOS 6.1).

Wolfert de Kraker describes a technique for using the SQLite FTS4 engine simultaneously with Core Data. It involves creating a Virtual Table within the same SQLite database that Core Data uses. Then he uses FMDB to create a search method which uses the FTS4 search to respond to UISearchDisplayController delegate calls. NSManagedObjectIDs are returned as the raw SQLite search results, and then Core Data retrieves these objects.
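
As a sketch of that shape (my own illustration, with hypothetical table and column names, not Wolfert de Kraker’s code): create an FTS4 virtual table alongside Core Data’s tables, store each object’s searchable text plus its objectID URI, and translate MATCH results back into managed objects.

#import <CoreData/CoreData.h>
#import "FMDB.h"

// One-time setup: a shadow FTS4 table living alongside the tables Core Data manages.
void CreateNoteSearchTable(FMDatabase *database)
{
    [database executeUpdate:@"CREATE VIRTUAL TABLE IF NOT EXISTS note_fts USING fts4(objectURI, body)"];
}

// Full text query: FTS4 finds the matching rows, Core Data fetches the real objects.
NSArray<NSManagedObject *> *NotesMatchingQuery(FMDatabase *database, NSString *query, NSManagedObjectContext *context)
{
    NSMutableArray<NSManagedObject *> *notes = [NSMutableArray array];
    FMResultSet *resultSet = [database executeQuery:@"SELECT objectURI FROM note_fts WHERE body MATCH ?", query];
    while ([resultSet next]) {
        NSURL *uri = [NSURL URLWithString:[resultSet stringForColumn:@"objectURI"]];
        NSManagedObjectID *objectID = [context.persistentStoreCoordinator managedObjectIDForURIRepresentation:uri];
        NSManagedObject *note = (objectID != nil) ? [context existingObjectWithID:objectID error:NULL] : nil;
        if (note != nil) {
            [notes addObject:note];
        }
    }
    [resultSet close];
    return notes;
}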

This 2010 Stack Overflow answer describes a similar approach. A different answer a few months later offers a sideways variation: instead of storing NSManagedObjectIDs in the shadow SQLite table, store SQLite row IDs as Core Data attributes.

These solutions include a custom copy of SQLite in their projects. Although they are iOS projects, I see no reason you couldn’t use the same approach on OS X.

I found two other blog posts describing other implementations of this approach, one from Regular Rate & Rhythm and one from Long Weekend Mobile, both from 2010.

I have to say that it makes me very nervous to think of mucking around in Core Data’s SQLite file. Call me superstitious.

Open Source FTS

We looked at two long-established open source FTS engines, Xapian and Lucene.

Lucene is a Java-based search engine, part of the Apache project. A port to GNUstep, Lucene Kit, was begun in 2005 and seems to have languished for a while. The most current version I found was https://github.com/zbowling/LuceneKit, which was active as recently as 2012.

Xapian is a C++ search engine, and the one that Chris uses in his production code. It is presently licensed under the GPL, which would make for some complications if you were to include it in an iOS project. There was some mention on the Xapian forum of writing an Objective-C binding. The conclusion was that it should be straightforward, but that no one has done it yet.