Google Developers Blog: compressorhead
By Colt McAnlis, Google Developer Advocate
The next five billion humans who come online will be doing so from parts of the world where connectivity is costly and slow. With the average website approaching 2 megabytes in size and the average Android game approaching 125 megabytes, users in these markets will have to make a tough choice between content and cost. Compression algorithms, which shrink those payloads, will become critically important over the next decade.
Most developers are content to let compression be someone else’s problem. But the truth is that these algorithms sit at the intersection of optimization, information theory, and pragmatism. These videos will take us through the history of information theory, explain why compression matters, and show how different algorithm families approach this challenge.
Compressor Head, Episode 1 (Variable Length Codes)
Understanding compression algorithms means understanding how humans view and use data. Colt explores the creation of information theory, and how it spawned the concept of variable length codes, which since the early 1950s have been at the heart of data compression algorithms.
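To make the idea concrete (this sketch is ours, not from the episode): Huffman coding, the classic variable-length code, assigns short bit strings to frequent symbols and long ones to rare symbols. A minimal Python illustration:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a variable-length (Huffman) code table for `text`.

    Frequent symbols end up with short bit strings, rare ones with long
    bit strings, while no code is a prefix of another.
    """
    freq = Counter(text)
    # Heap entries: (total frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if len(heap) == 1:
        # Degenerate input with one distinct symbol: give it the code "0".
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees, prefixing their codes
        # with 0 and 1 respectively.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
```

Here the frequent `a` gets a 1-bit code while the rare `c` and `d` get 3-bit codes, so the whole string encodes in well under the 88 bits that fixed 8-bit characters would need.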
Compressor Head, Episode 2 (The LZ Compression Family)
In the world of compression, one algorithm family reigns supreme. Born in the late 1970s, the Lempel-Ziv algorithms have become the most dominant dictionary encoding schemes in compression. This episode explains why these algorithms are so dominant.
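The core trick of the LZ family is replacing repeated substrings with back-references into a sliding window of recently seen data. As a rough illustration (our sketch, not code from the episode), a greedy LZ77-style encoder emits `(offset, length, next_char)` tokens:

```python
def lz77_compress(data, window=64):
    """Greedy LZ77-style encoder: emit (offset, length, next_char) tokens.

    Each token copies `length` characters from `offset` positions back in
    the already-decoded output, then appends one literal character.
    """
    i, tokens = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        # Search the sliding window for the longest match at position i.
        for j in range(max(0, i - window), i):
            length = 0
            # Matches may overlap the current position, as in real LZ77.
            while i + length < len(data) - 1 and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens):
    """Rebuild the original string by replaying the tokens."""
    out = []
    for off, length, ch in tokens:
        for _ in range(length):
            out.append(out[-off])  # copy from `off` characters back
        out.append(ch)
    return "".join(out)
```

Production LZ codecs (DEFLATE, LZ4, and friends) use far smarter match-finding and entropy-code the tokens, but the dictionary idea is the same.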
Compressor Head, Episode 3 (Markov Chain Compression)
At the cutting edge of compression algorithms sits the lonely kingdom of Markov Chains. These algorithms adopt an Artificial Intelligence approach to compression by allowing the encoder and decoder to ‘predict’ what data is coming next. In this episode you’ll learn how these magical algorithms compress data, and why some think that they are the future of compression.
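The key insight is that if the encoder and decoder maintain identical statistical models, the decoder can make the same predictions without any extra data on the wire. A toy order-1 model (our sketch, assuming a simple character alphabet) looks like this:

```python
from collections import Counter, defaultdict

class Order1Model:
    """Order-1 Markov model: predict the next symbol from the current one.

    A real Markov-chain compressor feeds these conditional probabilities
    into an arithmetic coder; good predictions cost fewer bits. Here we
    only expose the prediction itself.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        # Count how often each symbol follows each other symbol.
        for prev, nxt in zip(text, text[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, context):
        """Return the most likely next symbol after `context`, or None."""
        if not self.counts[context]:
            return None
        return self.counts[context].most_common(1)[0][0]

model = Order1Model()
model.train("the theory of the thing")
```

After training, `model.predict("t")` returns `"h"`, because `h` follows `t` every time in the sample. Both sides of a compressor update such a model symbol by symbol, staying in lockstep so the decoder always knows what the encoder predicted.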
While the world of compression is focused on making things smaller, we’re going big with a set of three YouTube videos introducing modern developers to the world of compression algorithms. And they’re all available now, exclusively on our Google Developers YouTube channel at http://g.co/compressorhead.
Colt McAnlis is a games developer advocate who believes every bit counts and that performance matters. He is a Udacity course instructor on HTML5 games and a book author. When he's not working with developers, Colt’s been known to compress games, buildings and mountains with his bare hands.
Posted by Louis Gray, Googler