what aspects of Blockstack/Gaia could ideas from Sirius perhaps fix?
  no direct support for files readable only by a set of people
  no direct support for files writeable only by a set of people
  no defense against Gaia serving stale versions of files

the intended scenario (2003) is a lot like Athena and AFS
  workstations, LAN, file servers
  directories, files
  users share files, ro and rw
  some files only accessible by particular sets of users
they didn't want to trust their servers!
  but: for the authors, this seems like a thought experiment
Blockstack's situation is similar:
  decentralized apps don't want to trust Gaia servers
  plus we have the Blockstack PKI to turn names into public keys

why don't we want to trust the servers?
  maybe p2p, run by people trying to steal or modify our data
  maybe commercial, but with corrupt employees
  maybe hackers have broken into the servers
  maybe the servers run buggy software

what bad things could the servers try to do?
  directly read or modify data
  conspire with users who have keys -- we don't trust all the users either
  discard our data, or ignore writes
  not enforce write restrictions
  write correct data to the wrong file
  serve correct data from the wrong file
  serve an old version of a file
  show different clients different information ("equivocation")

does signing and encrypting stop bad server behavior?

ACCESS CONTROL

how does Sirius grant (or withhold) read permission on a file?
  why the symmetric key? (FEK in Figure 2)
  why a separate symmetric key for each file?
if I want to grant read permission to Alice, how do I find her public key?
can the server illegally grant permission by adding a new encrypted key block?
  e.g. copy from another file's meta-data
can the server illegally revoke permission by deleting an encrypted key block?
how do I *revoke* read permission? 3.11
  why change the FEK and re-encrypt?
    after all, if the user has already read the file, it's too late -- no point?
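the read-access machinery above (a per-file FEK, one encrypted key block
per authorized reader, re-keying on revocation) can be sketched as follows.
this is a toy sketch, not Sirius' implementation: a SHA-256-based stream
cipher and per-user symmetric keys stand in for the AES encryption and RSA
public keys that real Sirius uses, and the function names are made up.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data against SHA-256(key || counter) blocks.
    # Symmetric, so the same call encrypts and decrypts.  Stands in for AES.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def make_file(content: bytes, reader_keys: dict):
    # One fresh FEK per file; one encrypted key block per authorized reader.
    # reader_keys maps username -> that user's key (a stand-in for the
    # reader's public key; real Sirius encrypts the FEK under RSA).
    fek = os.urandom(32)
    ciphertext = keystream_xor(fek, content)
    key_blocks = {user: keystream_xor(ukey, fek)
                  for user, ukey in reader_keys.items()}
    return ciphertext, key_blocks

def read_file(ciphertext, key_blocks, user, user_key):
    fek = keystream_xor(user_key, key_blocks[user])  # recover the FEK
    return keystream_xor(fek, ciphertext)

def revoke(content: bytes, reader_keys: dict, revoked_user: str):
    # Revocation: drop the user's key block AND pick a new FEK,
    # re-encrypting, so a remembered FEK can't read *future* updates.
    remaining = {u: k for u, k in reader_keys.items() if u != revoked_user}
    return make_file(content, remaining)
```

note that just deleting the key block without re-keying would leave the
revoked reader able to decrypt every later version with the FEK they
already know -- which is the question the notes turn to next.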
  if they haven't read the file, isn't deleting the key block enough?
    ... they may have read the file and remembered the FEK,
        and we don't want them to be able to read *future* updates
why does the key block contain the Username? Figure 2
  (we need it to find the pub key when re-encrypting for revocation)
who is allowed to change the read permissions on a file?
  what if I'm on vacation when my co-workers need to add someone new?
what if we have 100s of files we want to grant access to?
  e.g. a big repository, and someone new joins the team?
  e.g. piazza, with lots of existing posts, and a new student joins the class?

how do I grant *write* permission to Alice for one of my files?
how does Alice write one of my files?
in what sense does this forbid writes by unauthorized people?
  what if the file server modifies the file?
  what if the file server substitutes a different signed file?
what should a reader do if a signature check fails?
how do I revoke write permission?

FRESHNESS

what do they mean by fresh?
why do they focus on freshness of meta-data?
  what's the worrying attack?
    I delete Alice's write permission
      new FSK, same content signed with the new FSK
    she conspires with the file server, which reverts to the old md and content
    now Alice can sign new content, and others will accept it
should we worry about freshness for data (file content)?
  examples of bad things enabled by lack of data freshness?
    grades file: version 1 says rtm gets an A, version 2 says rtm gets a B
    a file holding the list of people who have a government security clearance
    name allocations; financial transactions
  why do we think they didn't guarantee data freshness?

let's think about how to ensure freshness from untrusted servers.
  this turns out to be a serious recurring problem in many systems
    we'll look at, so it's worth pondering.
  freshness is hard because it seems to require not just checking
    state, but checking that nothing is *missing*.

could we just put a timestamp in each file's meta-data?
  updated to the current time whenever I write the meta-data?
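the timestamp idea just raised can be pictured as follows. this is a
minimal sketch under assumptions: `FRESHNESS_WINDOW`, `sign_md`, and
`check_md` are made-up names, and an HMAC under an owner key stands in
for the public-key signature a real system would use.

```python
import hashlib
import hmac

FRESHNESS_WINDOW = 60  # seconds; readers reject anything older than this

def sign_md(owner_key: bytes, md: bytes, now: float):
    # Owner stamps the meta-data with the current time and signs both.
    # HMAC stands in for a public-key signature that readers could verify
    # with the owner's public key.
    ts = int(now)
    tag = hmac.new(owner_key, md + ts.to_bytes(8, "big"),
                   hashlib.sha256).digest()
    return md, ts, tag

def check_md(owner_key: bytes, md: bytes, ts: int, tag: bytes,
             now: float) -> bool:
    expected = hmac.new(owner_key, md + ts.to_bytes(8, "big"),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                        # server tampered with md or ts
    return now - ts <= FRESHNESS_WINDOW     # old-but-validly-signed replay?
```

the signature check stops the server from forging meta-data; the
timestamp check is what stops it from replaying an old, validly signed
version -- provided the owner keeps re-signing and readers have a
trustworthy clock, which is exactly where the questions below pick up.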
  how would readers decide if it's OK? (check if recent)
  can participants get a secure source of time?

how about if, every minute, I rewrite every one of my files' meta-datas?
  rewrite with timestamp = now
  how do readers verify?
  why don't we like this scheme?
  when re-writing, how do I know I'm re-writing the freshest md?

to reduce cost: every minute, write sign(hash(all md's), time) to the server?
  also update the big hash when changing any file's MD
  is this faster? less writing? less md reading?
  how do readers verify? (do they need to read EVERY file's MD?)
  when re-writing, how do I know I'm re-writing the freshest md?

how to avoid reading all files' md's whenever we update the root md?
  this is Sirius' scheme (tree of hashes)
  the tree's point is to decrease update/verify cost
  ... diagram of hash tree, with subdir
  why cheaper to update
  why cheaper to verify
  "Merkle tree," shows up a lot

with the tree, what happens if the server supplies a stale md-file?
does this provide a "guarantee of freshness"?
  that is, are all meta-data reads guaranteed to be fresh?
  (non-owner readers know only that it was fresh a minute ago)
what if I am not online for a while?
what if I update file x's meta-data from workstation W1, turn W1 off,
  and instantly log into W2 and update file y's meta-data in the same
  directory as x?  W2 has to generate a signed root md-file from the
  md-file of x (as well as y).  how does W2 ensure it has a fresh
  md-file for x?

how can we do better?
  a trusted hash server, holding each user's latest root hash
    maybe then we also wouldn't need the every-minute thing, or timestamps
    are we OK with this kind of trust? is there an entity we all trust?
  maybe every user signs every other user's root hash
    maybe then only one user has to be online and signing
why does Sirius have a tree per user, not a global tree?
  could security be improved with a global tree? or efficiency?
what should a client do if freshness validation fails?

CONCLUSION

do we need mutability?
do we need multiple people to be able to write a data item?

what have we learned?
  ideas for end-to-end access control, in the face of untrusted servers
  ideas to verify freshness, in the face of untrusted servers
access control worries
  do we need groups?
  will re-keying on revocation be too expensive?
freshness worries
  the root hash is expensive to update and validate
  one-minute window of non-freshness
  the owner must be online all the time, with their private key
  the owner must remember their own last root hash
  not clear if Sirius supports multiple devices per user
  a trusted hash server helps -- but centralized trust is disturbing
  what to do if freshness verification fails?
  do we need data freshness, as well as meta-data freshness?
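as a recap, the tree-of-hashes idea from the FRESHNESS section can be
sketched as follows. this is a toy sketch, not Sirius' on-disk format:
leaves stand for per-file md hashes, the number of files is assumed to
be a power of two for brevity, and in a real system the owner would
sign (root, timestamp) with their private key every minute.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # levels[0] = leaf hashes; each parent = H(left || right);
    # levels[-1][0] is the root the owner signs.
    assert leaves and len(leaves) & (len(leaves) - 1) == 0  # power of two
    levels = [[h(l) for l in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def proof(levels, idx):
    # Sibling hashes from leaf to root: O(log n) hashes, so a reader
    # need not fetch every file's md to check one file against the root.
    path = []
    for level in levels[:-1]:
        sib = idx ^ 1
        path.append((level[sib], sib < idx))  # (hash, sibling-is-left?)
        idx //= 2
    return path

def verify(leaf, path, root):
    cur = h(leaf)
    for sib, sib_is_left in path:
        cur = h(sib + cur) if sib_is_left else h(cur + sib)
    return cur == root
```

updating one file's md only recomputes the hashes on its leaf-to-root
path, and a reader only checks that same path against the signed root --
the cheaper-update and cheaper-verify points from the notes above.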