In March 2007, Blaise Aguera y Arcas presented Seadragon & Photosynth at TED, a demo that created quite some buzz around the web. About a year later, in March 2008, Microsoft released Deep Zoom (formerly Seadragon) as a «killer feature» of their Silverlight 2 (Beta) launch at MIX08. Following this event, there was quite some back and forth in the blogosphere (damn, I hate that word) about the true innovation behind Microsoft's Deep Zoom.

Today, I don't want to get into the same kind of discussion but rather start a series that will give you a «behind the scenes» look at Microsoft's Deep Zoom and similar technologies.

This first part of «Inside Deep Zoom» introduces the main ideas & concepts behind Deep Zoom. In part two, I'll talk about some of the mathematics involved and finally, part three will feature a discussion of the possibilities of this kind of technology and a demo of something you probably haven't seen yet.

Background

As part of my awesome internship at Zoomorama in Paris, I was working on some amazing things (which you'll hopefully hear about soon) and in my spare time I decided to have a closer look at Deep Zoom (formerly Seadragon). That's when I did a lot of research on the topic and had the idea for this series, in which I want to share what I learned.

Introduction

Let's begin with a quote from Blaise Aguera y Arcas' demo of Seadragon at the TED conference[1]: «…the only thing that ought to limit the performance of a system like this one is the number of pixels on your screen at any given moment.»

What is this supposed to mean? See, I have a 24" screen with a maximum resolution of 1920 x 1200 pixels. Now let's take a photo from my digital camera, which shoots at 10 megapixels. The photo's size is typically 3872 x 2592 pixels. When I get the photo onto my computer, I end up with something that looks roughly like this:

No matter how I put it, I'll never be able to see the entire 10 megapixel photo at 100% magnification on my 2.3 megapixel screen. Although this might seem obvious, let's take the time to look at it from another angle: with this in mind, we no longer care whether an image has 10 megapixels (that is, 10'000'000 pixels) or 10 gigapixels (10'000'000'000 pixels), since the number of pixels we can see at any moment is limited by the resolution of our screen. This in turn means that looking at a 10 megapixel image and a 10 gigapixel image on the same computer screen should perform equally well. The same should hold for looking at those two images on a mobile device such as the iPhone. However, it is important to note that, with reference to the quote above, we might experience a performance difference between the two devices, since they differ in the number of pixels they can display.

So how do we manage to make the performance of displaying image data independent of its resolution? This is where the concept of an image pyramid steps in.

The Image Pyramid

Deep Zoom, or for that matter any similar technology such as Zoomorama, Zoomify, Google Maps etc., uses something called an image pyramid as a basic building block for displaying large images in an efficient way:

The picture above illustrates the layout of such an image pyramid. A typical image pyramid serves two purposes: it stores an image of any size at many different resolutions (hence the term multi-scale), and it stores each of these resolutions sliced up into many parts, referred to as tiles.
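
To make this layout a little more concrete, here is a minimal sketch in Python of how the levels and tile grid of such a pyramid could be computed. The function name, the halving scheme and the 256-pixel default tile size are my own assumptions for illustration, not anything prescribed by Deep Zoom:

    # A minimal sketch of an image pyramid layout (halving per level, fixed tile size).
    # The 256-pixel tile size is an assumption for illustration, not a Deep Zoom constant.
    import math

    def pyramid_levels(width, height, tile_size=256):
        """Yield (level, width, height, columns, rows) from a 1 x 1 thumbnail up to full size."""
        num_levels = int(math.ceil(math.log2(max(width, height)))) + 1
        for level in range(num_levels):
            # The highest level is the original; every step down halves both dimensions.
            scale = 2 ** (num_levels - 1 - level)
            level_width = int(math.ceil(width / scale))
            level_height = int(math.ceil(height / scale))
            columns = int(math.ceil(level_width / tile_size))
            rows = int(math.ceil(level_height / tile_size))
            yield level, level_width, level_height, columns, rows

    # Example: the 10 megapixel photo from above.
    for entry in pyramid_levels(3872, 2592):
        print(entry)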

Because the pyramid stores the original image (redundantly) at different resolutions, we can display the resolution that is closest to the one we need and, in case the entire image doesn't fit on our screen, load only the parts of the image (tiles) that are actually visible. Setting the pyramid's parameters, such as the number of levels and the tile size, allows us to control the required data transfer.
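
Under the same assumptions as the snippet above, here is a rough sketch of the two decisions just described: picking the pyramid level closest to the resolution we actually need on screen, and requesting only the tiles that intersect the visible viewport. Again, this is my own simplification, not Deep Zoom's actual implementation:

    # A rough sketch, with the same assumptions as above, of choosing a level and
    # finding the visible tiles. This is a simplification, not Deep Zoom's actual code.
    import math

    def choose_level(displayed_width, num_levels):
        """Pick the lowest level that still has roughly as many pixels as we display."""
        # Level n is roughly 2**n pixels wide; the top level is the full-resolution image.
        level = int(math.ceil(math.log2(max(displayed_width, 1))))
        return min(max(level, 0), num_levels - 1)

    def visible_tiles(viewport, level_width, level_height, tile_size=256):
        """Return the (column, row) pairs of tiles overlapping the viewport.

        The viewport is (x, y, width, height) in pixel coordinates of the chosen level."""
        x, y, width, height = viewport
        first_column = max(0, x // tile_size)
        last_column = min((level_width - 1) // tile_size, (x + width - 1) // tile_size)
        first_row = max(0, y // tile_size)
        last_row = min((level_height - 1) // tile_size, (y + height - 1) // tile_size)
        return [(column, row)
                for row in range(first_row, last_row + 1)
                for column in range(first_column, last_column + 1)]

    # Example: a 1920 x 1200 viewport over the top-left corner of the photo at 100%.
    print(visible_tiles((0, 0, 1920, 1200), 3872, 2592))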

Image pyramids are the result of a space vs. bandwidth trade-off often found in computer science. The image pyramid obviously has a bigger file size than its single-image counterpart (to find out how much exactly, be sure to come back for part two) but, as you see in the illustration below, in terms of bandwidth it's much more efficient at displaying high-resolution images, where most parts of the image are typically not visible anyway (grey area):
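
As a back-of-the-envelope figure (part two works this out properly): every level holds a quarter of the pixels of the level above it, so the whole pyramid stores roughly 1 + 1/4 + 1/16 + … = 4/3 times the pixels of the original image, that is, about a third more, before tile overlap and image-format overhead are taken into account.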

As you can see in the picture above, there is still more data loaded (colored area) than absolutely necessary to display everything that is visible on the screen. This is where the image pyramid parameters I mentioned before come into play: tile size and number of levels determine the relationship between the amount of storage, the number of network connections and the bandwidth required for displaying high-resolution images.
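
To make the trade-off tangible, consider a hypothetical example with numbers of my own choosing: on a 1920 x 1200 screen, 256-pixel tiles mean that at most 9 x 6 = 54 tiles can overlap the viewport at any one time (up to ceil(1920 / 256) + 1 columns and ceil(1200 / 256) + 1 rows, depending on alignment), whereas 512-pixel tiles bring this down to at most 5 x 4 = 20 tiles, at the cost of each tile being four times as large and of more off-screen pixels being loaded along the visible edges. Fewer, larger tiles mean fewer network connections but more wasted bandwidth; more, smaller tiles mean the opposite.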

Next

Well, this was it for part one of Inside Deep Zoom. I hope you enjoyed this short introduction to image pyramids & multi-scale imaging. If you want to find out more, as usual, I've collected some links in the Further Reading section. Other than that, be sure to come back, as the next part of this series – part two – will discuss the characteristics of the Deep Zoom image pyramid and I will show you some of the mathematics behind it.

Further Reading

References

1. Blaise Aguera y Arcas: Seadragon & Photosynth demo, TED Conference, March 2007.
