Free Content Delivery Network using DNS cache

by bysin on 2010-10-27
Programming

Why spend money on expensive CDN hosting when there's a perfectly good, free, global one available? That's right: DNS cache. Most open recursive DNS servers will cache records (A, CNAME, PTR, TXT, etc.) for the length of the specified TTL, and there are millions of them worldwide. Once a public DNS server has the records in cache (usually after a single request), it requires no further bandwidth from the originating server.

Unfortunately there's a limit to the size of a record a DNS server will cache, and a limit to the length of the DNS packet itself. To store files in DNS cache we must encode the file and split it into multiple records. We're going to use TXT records for this example, whose strings are limited to 255 characters each.

file1.part1.cdn 14400 IN TXT 
"ICAgICAgQ2FuYWRhIEludmFzaW9uIFBsYW4KICAgIFRPUCBTRUNSRVQg
IENPTkZJREVOVElBTAotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tL
QoKU3RlcCAxKSBBcm0gYmVhdmVycyB3aXRoIHJpZmxlcwpTdGVw"

file1.part2.cdn 14400 IN TXT 
"IDIpIFRyYWluIG1vbmtleXMgdG8gam91c3QKU3RlcCAzKSBQcm9maXQ
KCldlIGhhdmUgYSBncm91cCB0aGF0IG1lZXRzIEZyaWRheXMgYXQgbWl
kbmlnaHQgdW5kZXIgdGhlCmJyb29rbHluIGJyaWRnZSBhbmQgdGhlIHBh"

file1.part3.cdn 14400 IN TXT 
"c3N3b3JkIGlzIHNpYyBzZW1wZXIgdHlyYW5uaXMuCg=="

The receiver simply requests all parts of the file, reassembles them, and decodes the result. I've included an example program that does just that (both CDN client and server).
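The receiving side can be sketched in a few lines of Python. Here an in-memory dict stands in for the DNS cache (a real client would issue TXT queries against a public recursive resolver; the names and helpers below are illustrative only):

```python
import base64

# Hypothetical stand-in for the DNS cache: record name -> TXT string.
FAKE_CACHE = {}

def publish(name, data, chunk=255):
    """Server side: encode and split a file into TXT-sized parts."""
    encoded = base64.b64encode(data).decode("ascii")
    for i in range(0, len(encoded), chunk):
        FAKE_CACHE[f"{name}.part{i // chunk + 1}.cdn"] = encoded[i:i + chunk]

def fetch(name):
    """Client side: request part1, part2, ... until a lookup fails,
    then join the TXT strings and base64-decode."""
    parts, i = [], 1
    while True:
        txt = FAKE_CACHE.get(f"{name}.part{i}.cdn")
        if txt is None:
            break
        parts.append(txt)
        i += 1
    return base64.b64decode("".join(parts))

publish("file1", b"Step 1) Arm beavers with rifles\n")
print(fetch("file1"))
```

The real client does the same loop, just with TXT lookups against a resolver instead of dict reads.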

# ./server --path example_data
...

# ./client --domain virtserve.com --list
Inode      Size         Path
------------------------------------
4068250    254          Epicfail.txt
4068229    283          Important_Plan.txt

# ./client --domain virtserve.com --get 4068250
<gh0st-> epicfail.c?
<matja> bysin wrote that
<bysin> its 3000 lines of nothing but preprocessor macros that turns gcc into a tetris game
<matja> if you distcc it, can you play multplayer?
<bysin> hold on, i'll #include you on the next round
<matja> thx

In the program above, the first request for a file uses the CDN server and any subsequent requests do not, since the public DNS server has it in cache. I look forward to seeing streaming videos via DNS in the future.

Click here to download the DNS CDN Source Code

{ 20 comments }

Omri October 27, 2010 at 08:06

I’m wondering how fast this is in practice. Have any data for us?

bysin October 27, 2010 at 08:45

I did an unscientific speed test and came up with 4 KB/s using the public DNS server 4.2.2.1

Brontos October 27, 2010 at 12:23

While horrible for downloading files, it seems like this would be ideal as a place for programmers to put a variable – the version number of a program.

hm2k October 27, 2010 at 14:16

This wouldn’t be suitable for software version numbers.

You may wish to update the software version number before the ttl expires.

Brontos October 27, 2010 at 15:37

Fair enough.

mike October 27, 2010 at 13:13

Great way to send uncontrolled messages. No (communist) firewall is blocking it…

Jack October 27, 2010 at 13:56

Cory Doctorow came up with this in Little Brother. It’s quite a cool idea, I’m impressed someone’s started to implement it

sep332 October 27, 2010 at 20:02

Nope, Cory was calling out a technique invented (I think) by Dan Kaminsky, quite a bit earlier. Also, downloading files over DNS is not fast, like it is in the book.

prime idiot October 27, 2010 at 14:12

Worst.
Performance.
Ever.

Frank N Beans October 27, 2010 at 16:47

Ima give you alls a secret tipz, straight from a l0pht goddess: you can do multiple DNS lookups in parallel and achieve 1GB/s. There are zero-hour DNS torrents (aka “Namez Warez”), widely circulated on the IRC. Mmmm…. dun tell.

qbit October 27, 2010 at 15:08

clever and dangerous. i believe dns admins will change caching policies due to issues like this and cache poisoning.

Flavio October 27, 2010 at 17:23

Not new, slow and abuses public DNS servers by using their resources for unintended purposes.

glugglug October 27, 2010 at 17:29

The main limiting factor rendering most pages, which CDNs try to alleviate, is not bandwidth, but latency.
Requiring more round trips because everything is split up in such tiny pieces exacerbates the latency problem.

Also, any sane public DNS server is going to see this many subdomains being pulled into cache as a DOS attack, and few honor high TTLs in general.

dna October 27, 2010 at 17:53

I found your blog on google and read a few of your other posts. I just added you to my Google News Reader. Keep up the good work Look forward to reading more from you in the future.

Mr-Pointing-Out-The-Obvious October 27, 2010 at 17:55

I’m sorry but this is just plain stupid. Operational characteristics, feasibility, performance, programming model, *standard addressability of the data*, etc. “ByteWorm” – you are a moron.

mike October 27, 2010 at 18:13

Wow, it's almost as if Dan Kaminsky didn't find this 6 years ago…

moa October 27, 2010 at 18:55

wow. how abusive

Will October 29, 2010 at 06:11

How abusive. I totally agree. The whole point of a CDN is to reduce latency and increase speeds. Everything else has been said.

khemael October 29, 2010 at 20:14

Why UDP? Serving files over TCP would be better and make more sense, wouldn't it?

Dan Kaminsky November 2, 2010 at 12:30

It doesn’t matter that I was playing with this stuff back in 2004, this is cool code and people shouldn’t be so dismissive! There’s much more polish here than I ever built.

For the record, DNS CDNs don’t have to be slow…you just have to distribute across many, many, many servers. 38,000 servers * 20K = 700MB ;)

Comments on this entry are closed.
