Tom Insam

From “Chrome’s insane password security strategy”:

There are two sides to this. The developer’s side, and the user’s side. Both roles have vastly different opinions as to how the computer works. Any time I try to draw attention to this, I get the usual responses from technical people:
[..]
The computer is already insecure as soon as you have physical access

Yes. It is. And I totally agree that no non-technical person understands this, or should have to understand it.

We’ve also been repeatedly asked why we don’t just support a master password or something similar, even if we don’t believe it works. We’ve debated it over and over again, but the conclusion we always come to is that we don’t want to provide users with a false sense of security, and encourage risky behaviour. We want to be very clear that when you grant someone access to your OS user account, that they can get at everything. Because in effect, that’s really what they get.

Justin Schuh

But I believe that technical people are fine with this not because they understand that physical machine access == “now it’s my computer” (they do, of course; that’s not the point), but because they have a much lower-level, visceral connection with their computer. It’s part of them. So of course it’s an extension of them from an authentication point of view. It’s their proxy to the universe. No-one touches my keyboard but me.

Of course, as a technical person, I know why it’s the way it is. Chrome has to type your passwords in for you. So it has to store them. The only long-term solution to this is to stop using passwords in favour of something better.

The Lanyrd Android app does the majority of its layout programmatically, because the views are described by a JSON file sent from the server. There aren’t a lot of resources on how to build a set of swipeable views with a PagerTabStrip programmatically, so here’s what I’m doing:

import android.support.v4.view.PagerTabStrip;
import android.support.v4.view.ViewPager;
import android.view.Gravity;

ViewPager pager = new ViewPager(context);

// ViewPagers must have a view ID. I use http://stackoverflow.com/a/15442997/656736 to generate them.
pager.setId(MyApplication.generateViewId());

// The strip goes in as a direct child of the pager; the gravity on its
// ViewPager.LayoutParams pins it to the top of every page.
PagerTabStrip strip = new PagerTabStrip(context);
ViewPager.LayoutParams layoutParams = new ViewPager.LayoutParams();
layoutParams.height = ViewPager.LayoutParams.WRAP_CONTENT;
layoutParams.width = ViewPager.LayoutParams.MATCH_PARENT;
layoutParams.gravity = Gravity.TOP;
pager.addView(strip, layoutParams);

// The strip takes its labels from the pager's adapter, so the PagerAdapter
// attached with pager.setAdapter() should override getPageTitle().

Edward Snowden has set up a dead man’s switch. He’s distributed encrypted copies of his document trove to various people, and has set up some sort of automatic system to distribute the key, should something happen to him. [..] I’m not sure he’s thought this through, though. I would be more worried that someone would kill me in order to get the documents released than I would be that someone would kill me to prevent the documents from being released.

My problem is as follows. I want to copy many, many files from one S3 bucket to another. So many, in fact, that s3cmd explodes with “Killed”, because the OOM killer has taken exception to it listing every single file in both buckets before starting (yes, I am using the latest version).
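
The root cause is that s3cmd materialises the complete listing of both buckets in memory before it copies anything. boto’s bucket listing, by contrast, is a lazy iterator that fetches another batch of keys from S3 as you consume it, and that’s what the script below leans on. A quick sketch of the difference (bucket stands in for any boto bucket object):

# Eager: pulls every key in the bucket into one Python list up front.
# With enough keys, this is the failure mode that gets s3cmd Killed.
all_keys = list(bucket.list())

# Lazy: the iterator requests more keys as the loop consumes them,
# so memory use stays roughly constant however big the bucket is.
for key in bucket.list():
    print key.name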

This script assumes that the copying account can read from the source bucket. If you want to copy files from a bucket owned by a different user, grant the account running the script access to the bucket by visiting the S3 console and adding the username of the copying user (an email address works fine) as a “Grantee” in the Permissions for the source bucket, with “list” permissions.
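
If you’d rather script that grant than click through the console, boto can add it from the source account’s side. A minimal sketch, assuming you hold credentials for the account that owns the source bucket; the credential names and email address below are placeholders:

from boto.s3.connection import S3Connection

owner_conn = S3Connection(SOURCE_AWS_ACCESS_KEY_ID, SOURCE_AWS_SECRET_ACCESS_KEY)
source = owner_conn.get_bucket("source-bucket-name")

# "READ" on a bucket is the API-level spelling of the console's
# "list" permission for that grantee.
source.add_email_grant("READ", "copying-user@example.com")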

It’s a very stupid script. It enumerates every file in the source bucket, then blindly copies each one to the target bucket unless the target already contains a file with the same name. It doesn’t do checksumming, size comparison, or anything clever. It won’t delete files in the target that don’t exist in the source, and it won’t update changed files. But it’s faster than s3cmd, it starts doing things instantly, it’s pretty cheap (and safe) to kill and restart, and it’ll run on a tiny EC2 instance, so you won’t incur bandwidth transfer charges (you’ll still pay for requests; this isn’t free).

ASSETS_AWS_ACCESS_KEY_ID = '...'
ASSETS_AWS_SECRET_ACCESS_KEY = '...'
BUCKET_FROM = "source-bucket-name"
BUCKET_TO = "target-bucket-name"

import boto # developed on boto 2.9.6.
from boto.s3.connection import S3Connection
conn = S3Connection(ASSETS_AWS_ACCESS_KEY_ID, ASSETS_AWS_SECRET_ACCESS_KEY)
source = conn.get_bucket(BUCKET_FROM)
target = conn.get_bucket(BUCKET_TO)

# .list() is a magical iterator object, it'll make
# more requests of S3 as needed
for idx, entry in enumerate(source.list()):
    if entry.name.endswith("/"):
        # skip "directory" placeholder keys
        continue
    print idx, entry.name
    if not target.get_key(entry.name):
        # this is a trade-off. Checking for target existence makes the first
        # run slower, but subsequent runs much faster, assuming only a subset
        # of files change.
        print "..copying"
        try:
            entry.copy(dst_bucket=target, dst_key=entry.name, validate_dst_bucket=False)
        except boto.exception.S3ResponseError, e:
            # I can only copy files I have access to; log the error and
            # carry on rather than bombing out half-way.
            print e

print "all done!"

(Side note. Be nice to future maintainers: don’t use the S3 Virtual Hosting stuff. You can’t rename buckets, and you can’t re-parent them, so you’re stuck using that bucket forever. Put the files in a bucket with any old name, then use CloudFront to serve them.)
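
If you want to script that last part too, boto’s cloudfront module can create the distribution. A rough sketch, assuming boto 2.x; the credentials, bucket name, and comment below are placeholders:

from boto.cloudfront import CloudFrontConnection
from boto.cloudfront.origin import S3Origin

cf = CloudFrontConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
origin = S3Origin(dns_name="any-old-name.s3.amazonaws.com")
dist = cf.create_distribution(origin=origin, enabled=True, comment="static assets")

# Link to this hostname (or CNAME your own domain to it) rather than to
# the bucket, so the bucket name never leaks into your URLs.
print dist.domain_name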