Wednesday, November 5, 2008

Scientists Create Mouse Out Of Frozen Cells

TOKYO: Japanese scientists said Tuesday they had created a mouse from a dead cell frozen for 16 years, taking a step toward the long-impossible dream of bringing back extinct animals such as mammoths.

Scientists at the government-backed research institute Riken used the dead cell of a mouse that had been preserved at minus 20 degrees Celsius (minus 4 degrees Fahrenheit)—a temperature similar to frozen ground.

The scientists hope that the first-of-its-kind research will pave the way to restoring extinct animals such as the mammoth.

The findings were published in the Proceedings of the National Academy of Sciences in the United States.

The scientists extracted a cell nucleus from an organ of the dead mouse and implanted it into an egg cell from a living mouse, leading to the birth of the cloned mouse, the researchers said.

“The newly developed technology of nucleus transfer greatly improved the possibility of reviving extinct animals,” the research team led by Teruhiko Wakayama said in a statement.

The cloned mouse was able to reproduce with a female mouse, it added.

Wednesday, October 22, 2008

Digital Technology Bringing a 3-D Revival to Theaters

Get out the goofy glasses.

As Hollywood studios and national movie chains search for new ways to get consumers into theaters, they’re turning back to 3-D for their future.

Studios have announced plans to release a record 25 or more 3-D titles over the next two years, including a 3-D re-release of the hit "The Nightmare Before Christmas" that premieres Sunday (Oct. 19).

Movie theaters, meanwhile, are rolling out 3-D-capable screens in record numbers.

This month, a consortium of theater owners -- including the AMC, Cinemark and Regal chains -- announced an agreement with five Hollywood studios to spend nearly $1 billion to install at least 14,000 digital projection systems in theaters across North America over the next several years. That would nearly triple the number of digital theaters in North America.

Despite financing concerns amid the meltdown of the banking system, backers say some of the first digital screens from the agreement could open the first quarter of next year.

"We’re going to push forward as quickly as possible," said Rich Manzione, vice president of strategic development for the Digital Cinema Implementation Partners consortium.

Digital movie projection systems are key to the newest 3-D movie technology. Today’s digital cinema projectors -- the vast majority of which are made by Dallas-based Texas Instruments Inc. -- create 3-D images with a single machine, while old 3-D technology required two projectors running the same film simultaneously.

The new projectors send one image intended for the viewer’s left eye and one for the right. Special polarized eyeglasses prevent the right eye from seeing the left-eye movie, and vice versa. The brain combines the two projections into a single 3-D image.

The reason why Hollywood is hot for 3-D is simple. With big-screen televisions, DVDs, high-definition and pay-per-view movies now commonplace, fewer consumers are going to theaters.

"This gives (theater owners) the chance to provide something that consumers can’t get at home," Manzione said.

It also lets them charge higher prices and reap higher profits. Theaters typically charge a few dollars more for 3-D productions than they do for 2-D productions.

Recent 3-D projects have been well received.

"Journey to the Center of the Earth," billed as the first live-action feature filmed in the latest 3-D technology, opened at No. 3 at the box office in July.

Most notably, theaters that showed the 3-D version of the film sold about three times as many tickets as theaters that showed the film in 2-D.

A 3-D concert video from Walt Disney Studios, "Hannah Montana & Miley Cyrus: Best of Both Worlds," debuted at No. 1 when it premiered in February.

The top-grossing 3-D film of all time came from Austin, Texas, filmmaker Robert Rodriguez. His "Spy Kids 3-D," released in July 2003, grossed an estimated $197 million worldwide. It cost $38 million to make. (NYT)

Monday, October 20, 2008

Dirac PRO

What is Dirac PRO?

Dirac Pro is a version of the Dirac family of video compression tools, optimised for professional production and archiving applications, especially where the emphasis is on quality and low latency (i.e. we avoid the long delay inherent in some of the implementations we use for broadcast or internet applications).

Typical production processes require lossless or virtually lossless compression with low latency. Dirac has been streamlined to meet these requirements.

Dirac Pro is designed for simplicity, efficiency and speed, and intended for high quality applications with lower compression ratios.

Like Dirac, it is an open technology which will work on all the major operating systems, such as Windows, Macintosh or Linux. As it is an open system, it is easy to port it onto a wide range of hardware, from specialist signal processors to application-specific LSI circuits.

Dirac Pro is capable of being used in post production at resolutions up to 4K with a base layer plus enhancement system, allowing very high quality proxy workflows.

Typical applications may be:

  • lossless or visually lossless compression for archives,
  • mezzanine compression for re-use of existing equipment, such as 1080P 50Hz carried in a 1080I 25 Hz channel
  • and low latency compression for live video links.

We can use SD infrastructure to route HD signals by compressing 1.5 GBit/s HDSDI links into 270 MBit/s SDI or SDTI. Likewise, compressing HDSDI signals to be carried on Gigabit Ethernet (at circa 600 MBit/s) would also allow HD working on cheap network infrastructure. DiracPRO introduces minimal artefacts at these levels of compression.
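
As a back-of-the-envelope check, here is a tiny Python sketch (using just the nominal link rates quoted above and ignoring blanking and packet overheads) showing the compression ratios those links imply:

# Rough compression ratios needed to carry a ~1.5 GBit/s HDSDI signal
# over lower-rate links. Nominal rates only; overheads are ignored.

HDSDI_MBPS = 1500

targets_mbps = {
    "270 MBit/s SDI or SDTI": 270,
    "Gigabit Ethernet (circa 600 MBit/s)": 600,
}

for link, rate in targets_mbps.items():
    print(f"{link}: about {HDSDI_MBPS / rate:.1f}:1 compression needed")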

Features

Dirac Pro will support the following technical features, required by professional end-users:

  • Intra-frame only (forward and backward prediction modes are also available if required)
  • 10 bit 4:2:2
  • No subsampling
  • Lossless or Visually lossless compression
  • Low latency on encode/decode
  • Robust over multiple passes
  • Ease of transport (can use a range of transport standards including MPEG-2 and SDTI)
  • Low complexity for decoding
  • Open Specification
  • Multiple vendor
  • Support for multiple HD image formats and frame rates.

Both Dirac and Dirac Pro are Open Technologies, and the Dirac software source code is licensed under the Mozilla Public License Version 1.1.

The technology of Dirac PRO

The main difference between Dirac and DiracPRO is in the treatment of the final process in compression - the arithmetic coding. Arithmetic coding is processing intensive and introduces delay. These are features that are undesirable in high end production work. The arithmetic coding produces most efficiency savings with highly compressed material. There is little benefit to be gained with the low compression used in top-end production. DiracPRO therefore omits the arithmetic coding.
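
A rough way to see this is to compare the entropy of a symbol distribution (the limit an arithmetic coder can approach) with the average length of a simple prefix code on the same distribution. The sketch below is only an illustration of the principle: it uses a Huffman code as a generic stand-in prefix code (not necessarily what Dirac Pro itself uses) on two made-up coefficient distributions, one heavily quantised (high compression) and one lightly quantised (near-lossless).

# Entropy (what arithmetic coding can approach) vs. average length of a
# Huffman prefix code, for a peaked and a flatter symbol distribution.
# The distributions are made up purely for illustration.
import heapq
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_avg_length(probs):
    """Average codeword length of a Huffman code for the given distribution."""
    # Heap items: (probability, tie-breaker, symbols contained in this subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:   # every symbol in the merged subtree goes one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return sum(p * l for p, l in zip(probs, lengths))

# Made-up coefficient distributions
peaked = [0.90, 0.05, 0.03, 0.02]   # heavy quantisation (high compression)
flat   = [0.35, 0.30, 0.20, 0.15]   # light quantisation (near-lossless)

for name, dist in [("peaked", peaked), ("flat", flat)]:
    h = entropy(dist)
    l = huffman_avg_length(dist)
    print(f"{name}: entropy {h:.2f} bits/symbol, prefix code {l:.2f} bits "
          f"({100 * (l - h) / h:.0f}% overhead)")

For the peaked case the prefix code wastes a lot relative to the entropy, while for the flatter (low compression) case the gap almost disappears, which is the effect described above.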

Applications

There are two specialised applications we have identified as prime targets for DiracPRO.

The first is low-delay compression for live links. This uses a special choice of motion compensation options, avoiding the delay that the most powerful option would introduce. With low delay, the system can be used for wireless links, within or outside the studio.

The second specialised application is a low compression option designed to deliver nearly lossless coding. This lets us deliver 1080 progressive formats over infrastructure designed for 1080 interlaced.

High-end production is rapidly migrating to high quality 1080 P50/60. But this format requires a higher data rate than the existing 1.5 GBit/s HDSDI infrastructure. The DiracPRO profile supports the transport of these high quality images over conventional high definition infrastructure. This is an evolution of work on Mezzanine coding (aka SMPTE VC-2) which originally used DCT as the transform. Now we are embracing the concept within DiracPRO and using wavelets as the prime compression tool. It is also suitable for quality coding of video for 270 MBit/s links.

Further applications may be lossless or visually lossless compression for archives or mezzanine compression for reuse of existing equipment, such as 1080P 50 Hz carried in a 1080I 25 Hz channel.

For existing standard definition links, compressing 1.5 GBit/s HDSDI links into 270 MBit/s SDI or SDTI would facilitate the use of standard infrastructure for routing HD signals. Likewise, compressing HDSDI signals to be carried on Gigabit Ethernet (at circa 600 MBit/s) would also allow high definition working on a cheaper network infrastructure. DiracPRO gives excellent quality at these levels of compression.

Thursday, October 16, 2008

Dirac - New Video Compression Technology

Dirac is a prototype algorithm for the encoding and decoding of raw video. It was presented by the BBC in January 2004 as the basis of a new codec for the transmission of video over the Internet. The codec was finalised on January 21, 2008, and further developments will only be bug fixes and constraints[1]. The immediate aim is to be able to encode standard digital PAL TV definition (720 x 576i pixels per frame at 25 frames per second) in real time; the reference implementation can encode around 17 frames per second on a 3 GHz PC but extensive optimisation is planned. This implementation is written in C++ and was released at SourceForge on 11 March 2004.

An intra-frame-only subset of the Dirac specification, known as Dirac Pro, is being considered for standardisation as SMPTE VC-2[2].

The codec is named in honour of the British scientist Paul Dirac.


Technology

Similar to common video codecs such as the ISO/IEC Moving Picture Experts Group (MPEG)'s MPEG-4 Part 2 or Microsoft's WMV 7, it can compress any size of picture from low-resolution QCIF (176x144 pixels) to HDTV (1920x1080) and beyond. However, it promises significant savings in bandwidth and improvements in quality over these codecs, by some claims even superior to those promised by the latest generation of codecs such as H.264/MPEG-4 AVC or SMPTE's VC-1 (which is based on Microsoft's WMV 9). Dirac's implementors make the preliminary claim of "a two-fold reduction in bit rate over MPEG-2 for high definition video"[1], an estimate which would put the design in about the same class of compression capability as the latest standardization efforts of H.264/MPEG-4 AVC and VC-1. MPEG-2 is the previous generation video codec used in the standard DVD format today.

Dirac employs wavelet compression, instead of the discrete cosine transforms used in most older codecs (such as H.264/MPEG-4 AVC or SMPTE's VC-1). Dirac is one of several projects attempting to apply wavelets to video compression. Others include Rududu [2], Snow and Tarkin. Wavelet compression has already proven its viability in the JPEG 2000 compression standard for photographic images.
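
For a feel of how wavelet coding works, here is a toy one-level 1-D Haar transform in Python (the simplest possible wavelet, not Dirac's actual filters): the low-pass band keeps most of the energy of a smooth signal, while the high-pass differences are small and cheap to code.

# One level of a 1-D Haar wavelet transform: split a signal into
# low-pass "averages" and high-pass "differences". Smooth signals put
# most of their energy into the averages, which is what makes
# wavelet-based compression work. Toy example only.
import numpy as np

def haar_1d(signal):
    x = np.asarray(signal, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (averages)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (differences)
    return lo, hi

def inverse_haar_1d(lo, hi):
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

# A smooth "scanline" of pixel values
line = np.array([10, 12, 14, 15, 15, 14, 12, 10], dtype=float)
lo, hi = haar_1d(line)
print("low-pass :", np.round(lo, 2))
print("high-pass:", np.round(hi, 2))   # small values, cheap to encode

# Crude "compression": throw away the high-pass band entirely
approx = inverse_haar_1d(lo, np.zeros_like(hi))
print("reconstruction:", np.round(approx, 2))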

Monday, October 13, 2008

More About Modulation Error Ratio

Modulation error ratio is digital complex baseband SNR. In fact, in the data world the terms "SNR" and "MER" are often used interchangeably, which adds to the confusion about SNR, especially considering that, as mentioned previously, the telecommunications world often uses the terms "CNR" and "SNR" interchangeably.
Why use MER to characterize a data signal? It is a direct measure of modulation quality and has linkage to bit error rate. Modulation error ratio is normally expressed in decibels, so it is a measurement that is familiar to cable engineers and technicians. It is a useful metric with which to gauge the end-to-end health of a network, although by itself, MER provides little insight about the type of impairments that exist.9
Figure 13 illustrates a 16-QAM constellation. A perfect, unimpaired 16-QAM digitally modulated signal would have all of its symbols land at exactly the same 16 points on the constellation over time. Real-world impairments cause most of the symbol landing points to be spread out somewhat from the ideal symbol landing points. Figure 13 shows the vector for a target symbol - the ideal symbol we want to transmit. Because of one or more impairments, the transmitted symbol vector (or received symbol vector) is a little different than ideal. Modulation error is the vector difference between the ideal target symbol vector and the transmitted symbol vector. That is,
Modulation error vector = Transmitted symbol vector - Ideal (target) symbol vector [Eq. 19]

Figure 13. Modulation Error Is a Measure of Modulation Quality. (Source: Hewlett-Packard)

If a constellation diagram is used to plot the landing points of a given symbol over time, the resulting display forms a small "cloud" of symbol landing points rather than a single point. Modulation error ratio is the ratio of average symbol power to average error power (refer to Figure 14):
MER(dB) = 10log(Average symbol power ÷ Average error power) [Eq. 20]
In the case of MER, the higher the number, the better.

Figure 14. Modulation Error Ratio Is the Ratio of Average Symbol Power to Average Error Power. (Source: Hewlett-Packard)

Mathematically, a more precise definition of MER (in decibels) follows:
MER(dB) = 10log[ Σ(I² + Q²) ÷ Σ(δI² + δQ²) ] [Eq. 21]
where I and Q are the real (in-phase) and imaginary (quadrature) parts of each sampled ideal target symbol vector, δI and δQ are the real (in-phase) and imaginary (quadrature) parts of each modulation error vector, and the sums are taken over the measured symbols. This definition assumes that a long enough sample is taken so that all the constellation symbols are equally likely to occur.
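To make the definition concrete, here is a small Python/NumPy simulation sketch (my own illustration, not taken from the cited material): it generates a run of noisy 16-QAM symbols and computes MER by comparing each received symbol with the symbol that was actually sent.

# Simulated MER measurement on a 16-QAM constellation (per Eq. 20/21).
import numpy as np

rng = np.random.default_rng(0)

# Ideal 16-QAM constellation: I and Q each take values {-3, -1, +1, +3}
levels = np.array([-3, -1, 1, 3])
constellation = np.array([complex(i, q) for i in levels for q in levels])

# Transmit a long run of random symbols and add Gaussian noise (the "impairment").
# In a real receiver the ideal symbol is the nearest (decided) constellation
# point; in this simulation we simply know which symbol was sent.
N = 100_000
ideal = rng.choice(constellation, size=N)
noise = rng.normal(0, 0.2, N) + 1j * rng.normal(0, 0.2, N)
received = ideal + noise

error = received - ideal   # modulation error vector (Eq. 19)
mer_db = 10 * np.log10(np.sum(np.abs(ideal) ** 2) / np.sum(np.abs(error) ** 2))
print(f"MER = {mer_db:.1f} dB")
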
In effect, MER is a measure of how "fuzzy" the symbol points of a constellation are. Table 4 summarizes the approximate ES/N0 range that will support valid MER measurements for various DOCSIS modulation constellations. The two values in the table for the lower threshold correspond to ideal uncoded symbol error rate (SER) = 10-2 and 10-3, respectively. The upper threshold is a practical limit based on receiver implementation loss. Outside the range between the lower and upper thresholds, the MER measurement is likely to be unreliable. The threshold values depend on receiver implementation. Some commercial QAM analyzers may have values of the lower ES/N0 threshold 2 to 3 dB higher than those shown in the table.

Table 4. Valid MER Measurement Range

Modulation Format    Lower ES/N0 Threshold    Upper ES/N0 Threshold
QPSK                 7-10 dB                  40-45 dB
16 QAM               15-18 dB                 40-45 dB
64 QAM               22-24 dB                 40-45 dB
256 QAM              28-30 dB                 40-45 dB

Good engineering practice suggests keeping RxMER in an operational system at least 3 to 6 dB above the lower ES/N0 threshold.10 This guideline accommodates temperature-related signal-level variations in the coaxial plant; amplifier and optoelectronics misalignment; test equipment calibration and absolute amplitude accuracy; and similar factors that can affect operating headroom. The lower ES/N0 threshold can be thought of as an "MER failure threshold" of sorts. That is, when unequalized RxMER approaches the lower ES/N0 threshold, the channel may become unusable with the current modulation. Possible workarounds include switching to a lower order of modulation, using adaptive equalization, or identifying and repairing whatever is causing the low RxMER in the first place.


Wednesday, October 1, 2008

DIY S-video to Composite Video Adapter

Have you ever been in a situation where your video source device has only an S-video output but your TV or monitor accepts only a composite video input? Then this simple adapter can be very handy. The circuit works with both PAL and NTSC standards.

Short pins 1 and 2 (Y ground and C ground) of the S-video connector together and connect them to the composite video ground of the RCA connector. Also connect pin 3 (luminance) of the S-video connector directly to the hot pin of the RCA connector. Insert a 470pF capacitor between pin 4 (chrominance) and the RCA hot pin. The voltage rating of the capacitor can be 10V or more.

The circuit operation is not ideal because impedances are not matched exactly right. But the picture quality you will get may be good enough for emergency situations.



Monday, September 29, 2008

Crystal Vision Supports BBC Studios' HD Upgrade Of Studio Four


165 Crystal Vision interface boards have been used by BBC Studios, part of BBC Resources Ltd., a wholly owned commercial subsidiary of the BBC, for the HD upgrade of its Studio Four at BBC Television Centre in London.

BBC Studios is investing nearly two million pounds in HD cameras, lenses, vision and monitoring equipment to support its entertainment production customers. Studio Four comprises 8,000 square feet and is home to "A Question of Sport", "The Jonathan Ross Show" and ITV1's "The Alan Titchmarsh Show". The new Studio Four is designed to work in any of the current HD and SD formats, can produce HD and SD simultaneously and includes Dolby E encoding.

The up and down conversion is provided by 20 of Crystal Vision's Up-and-down up/down/cross converters and 13 of the Q-Down123 short-delay down converters. The Up-and-downs will be used to convert existing legacy SD equipment to HD, with ten of them wrapped around the main matrix to provide additional feeds as required. The Q-Downs will provide the SD feeds when the studio is in HD mode. Explained Danny Popkin, Technical Development Manager for BBC Studios: "Much of our output will be simulcast in SD and HD, so the short delay of the Q-Down is ideal for this use."

Distribution of the HD signals will come from 17 of the HDDA105N and HDDA111N distribution amplifiers. Three SYN HD synchronisers will be used for synchronising internal sources that either do not have locking feeds or are unstable. BBC Studios will also use five of the new SYNNER-E HD multi-functional synchronisers - which include an embedder, de-embedder, tracking audio delay, audio processor and special Dolby E processing - for synchronising external sources to the studio and de-embedding the audio to AES. Two CoCo HD colour correctors and legalisers will be used to feed in-vision monitors or colour correct incoming pictures.

For the audio, BBC Studios will use 26 of the TANDEM HD-21 embedders/de-embedders. Explained Popkin: "To simplify VTR plugging, all the studio recording is done as embedded AES so there is an embedder/de-embedder either side of any recording device".

The studio matrix is either SD or HD, and PAL or Y/C sources and displays will therefore use the ADDEC-210, ALLDAC and MON210 converters to encode or decode PAL. Other Standard Definition boards used in the installation include VDA110R and DDA108 distribution amplifiers, SYN102 and SYNNER-E synchronisers and the TANDEM-200 audio embedder/de-embedder.

The boards are mainly housed in Indigo 4SE 4U frames - selected because they offer the highest density of boards (holding up to 24) with BBC Studios having limited space for rack equipment. Control comes from the Statesman PC software, with BBC Studios using the Signal Path add-on - which provides a view of the system based on the way the boards are used rather than their rack location - to graphically monitor its signal paths, because it simplifies fault finding in a complex signal chain.

The equipment was ordered and installed by Dega Broadcast Systems, with the studio going on air in early September.

Studio Four is the third studio that BBC Studios has upgraded to HD. Explained Popkin: "There is an increasing demand for HD content and this investment will ensure BBC Studios continues to fulfil the requirements of its customers, whose creative and editorial visions are increasingly in HD. It will also help the BBC achieve its HD aspirations." Crystal Vision was also selected for the HD upgrade of Studio One in 2006, which included 72 of the Up-and-down up/down/cross converters.

Saturday, September 27, 2008

Cool Do-It-Yourself Website

HACK N MOD (http://hacknmod.com) is a cool website where you can find an endless list of do-it-yourself (DIY) projects. It is a collection of projects from both amateurs and professionals, and you can even contribute yourself if you have a cool idea.

Friday, September 26, 2008

TV screen resolution

Nearly all of today's HDTVs are "fixed-pixel displays," meaning their screens use a fixed number of pixels to produce a picture. That includes flat-panel LCD and plasma TVs, as well as front- and rear-projection types that use DLP, LCD, or LCoS technology.

All of these fixed-pixel displays have a native resolution that tells you the maximum level of image detail a TV can produce. Two of the most common resolutions are 768p and 1080p, though you may also see 720p.

You may see these same resolutions listed as "1366 x 768 pixels" or "1920 x 1080 pixels." That tells you precisely how many pixels the screen actually has: the first number is the horizontal resolution and the second number is the vertical resolution. Multiplying these two numbers gives you a screen's total pixel count. As an example, 1920 x 1080 = 2,073,600 pixels, which is usually simplified to "2 million." By comparison, 1366 x 768 = 1,049,088 pixels — slightly over one million.

Comparison of three common screen resolutions
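
If you want to reproduce the arithmetic for other screens, a couple of lines of Python will do it:

# Total pixel count for some common fixed-pixel display resolutions.
resolutions = {
    "1366 x 768": (1366, 768),
    "1920 x 1080": (1920, 1080),
    "1280 x 720": (1280, 720),
}

for name, (width, height) in resolutions.items():
    print(f"{name}: {width * height:,} pixels")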

Friday, September 19, 2008

Japan's NHK-STRL New Technology

Take a look at the new technology that Japan's NHK Science & Technical Research Laboratories is developing. Here is the URL:

http://www.nhk.or.jp/strl/publica/rd/rd110/rd110.html

Friday, September 12, 2008

Some facts about CCD and CMOS image sensors in a digital camera

  • CCD sensors create high-quality, low-noise images. CMOS sensors, traditionally, are more susceptible to noise.
  • Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip tends to be lower. Many of the photons hitting the chip hit the transistors instead of the photodiode.
  • CMOS traditionally consumes little power. Implementing a sensor in CMOS yields a low-power sensor.
  • CCDs use a process that consumes lots of power. CCDs consume as much as 100 times more power than an equivalent CMOS sensor.
  • CMOS chips can be fabricated on just about any standard silicon production line, so they tend to be extremely inexpensive compared to CCD sensors.
  • CCD sensors have been mass produced for a longer period of time, so they are more mature. They tend to have higher quality and more pixels.

Sunday, September 7, 2008

Computer Memory

Computer memory refers to any recording media that retains electronic data used for computing. Memory can be classified into the following:

1. Temporary or primary - includes the RAM
2. Permanent or secondary - includes the hard disk, ROM/BIOS, optical and magnetic drives
3. Off-line or tertiary - includes memory media like USB memory, memory sticks, CompactFlash, SmartMedia, etc.

Thursday, September 4, 2008

A better understanding of Interlace


This animation demonstrates the interline twitter effect. The two interlaced images use half the bandwidth of the progressive one. The interlaced scan (second from left) precisely duplicates the pixels of the progressive image (far left), but interlace causes details to twitter. Real interlaced video blurs such details to prevent twitter, as seen in the third image from the left, but such softening (or anti-aliasing) comes at the cost of resolution. A line doubler could never restore the third image to the full resolution of the progressive image.

Note – Because the frame rate has been slowed down, you will notice additional flicker in simulated interlaced portions of this image.
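
Here is a small NumPy sketch (purely illustrative) of the point made above: split a progressive frame into its two fields, then line-double one field back to full height. A detail that only ever existed on a line of the other field simply cannot be recovered.

# Split a progressive frame into two interlaced fields, then line-double
# one field back to full height. A one-line-high detail carried only by
# the missing field is lost. Illustration only.
import numpy as np

frame = np.zeros((8, 8), dtype=int)
frame[3, :] = 9              # a one-pixel-high horizontal detail on line 3

top_field = frame[0::2]      # lines 0, 2, 4, 6
bottom_field = frame[1::2]   # lines 1, 3, 5, 7

# Naive line doubling of the top field: repeat each of its lines
line_doubled = np.repeat(top_field, 2, axis=0)

print("original line 3    :", frame[3])
print("line-doubled line 3:", line_doubled[3])   # the detail is gone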

Wednesday, September 3, 2008

The Ultimate Guide to Anamorphic

http://tinyurl.com/9968

Anamorphic widescreen DVD is all about giving you the most lines of picture resolution (and thus quality), while still allowing you to watch widescreen movies as they were meant to be seen.

Non-anamorphic video as it appears on a Digital 16x9 TV. The gray bars are generated by the TV to fill in the unused portions of the screen. Using the TV's "zoom" mode, you can magnify the image to fill the screen electronically, but at the cost of degrading the image quality significantly.

Anamorphic video as it appears on a Digital 16x9 TV. The "squished" image recorded on the disc (seen at top) is sent directly to the TV, which stretches the video signal horizontally until the correct aspect ratio is achieved. As you can see, the image fills the frame, while retaining its full vertical resolution. The picture quality is stunning.
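
A quick back-of-the-envelope comparison shows why this matters. The numbers below assume an NTSC DVD's 720 x 480 stored frame (my assumption for illustration, not a figure from the article):

# Active picture lines used for a 16x9 movie stored in a 720x480 DVD frame.
STORED_W, STORED_H = 720, 480

# Non-anamorphic (letterboxed inside a 4:3 frame): the widescreen image
# only occupies (4/3) / (16/9) = 0.75 of the frame height.
letterboxed_lines = STORED_H * (4 / 3) / (16 / 9)

# Anamorphic: the image is squeezed horizontally instead, so all 480
# lines carry picture; the 16x9 TV stretches it back out.
anamorphic_lines = STORED_H

print(f"letterboxed: {letterboxed_lines:.0f} active picture lines")
print(f"anamorphic : {anamorphic_lines} active picture lines")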

In essence, as I understand it (I could be wrong, you know), anamorphic encoding is done for the purpose of preserving the whole image as it was produced so that it can be viewed in all kinds of aspect ratios.

For a detailed article on this, please click the tinyurl link.

Cheers,
t2riki

Monday, September 1, 2008

Digital TV Standards Comparison


Please click on table to view larger image.

Suggest topics to discuss

Fellow brewers, please list (in the comments) topics you want discussed on this blog site. Everybody is welcome to post, just keep it short, in English or Tagalog. You can even copy and paste from other sites; there are no real copyright issues since these are just blogs we're making.

Friday, August 29, 2008

Super Hi-Vision

NHK of Japan started development of Super Hi-Vision technology and showed a prototype in 2002. Its video resolution of 7680 x 4320 is far better than HDTV, around 16x greater. The audio portion is 22.2 channels, compared to the typical 5.1 of HDTV. Because of the huge amount of data involved in this format, cameras, transmission technology and display media that can handle such content are also required. Present compression technology can only reduce the data stream to around 180 Mbps, and there is no transmission medium yet that can accommodate such bandwidth. It will take some time before commercial deployment of this format can be implemented.
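
To get a feel for the numbers, here is a rough calculation in Python. The frame rate and sampling assumptions (60 fps, 4:2:2 at 10 bits) are mine for illustration and are not stated above:

# Rough uncompressed video data rate for Super Hi-Vision vs. the ~180 Mbps
# compressed stream mentioned above. Frame rate, chroma sampling and bit
# depth are assumptions for illustration only.
WIDTH, HEIGHT = 7680, 4320
FPS = 60                 # assumed
BITS_PER_PIXEL = 20      # assumed 4:2:2 at 10 bits (10 luma + 10 chroma per pixel on average)

uncompressed_gbps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL / 1e9
compressed_mbps = 180

print(f"uncompressed: ~{uncompressed_gbps:.0f} Gbit/s")
print(f"compression ratio vs 180 Mbps: ~{uncompressed_gbps * 1000 / compressed_mbps:.0f}:1")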

Tuesday, August 26, 2008

Why does the HVX200 capture 1280x1080 rather than 1920x1080?

Probably because Panasonic couldn’t source imaging chips that would capture the native image, pixel for pixel, at the necessary price or size.
Want proof? Figure 1 shows a video frame exported from Premiere Pro at its native resolution of 1280x1080. As you can see, the wheel is oval. These are the pixels actually captured by the Panasonic.

In Figure 2, the frame was expanded by 150 percent to its intended display resolution of 1920x1080, and the wheel is round. This is the image you would see in your video editor or on your HDTV, which both know to expand DVCPRO HD video out to 150 percent of horizontal resolution.
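
For the curious, here is a minimal NumPy sketch of that 150 percent horizontal expansion (nearest-neighbour resampling only; real editors and TVs use better filtering):

# Stretch a 1280x1080 DVCPRO HD frame to its 1920x1080 display size by
# resampling horizontally at 150 percent (nearest-neighbour for simplicity).
import numpy as np

src_w, dst_w, height = 1280, 1920, 1080
frame = np.random.randint(0, 256, size=(height, src_w, 3), dtype=np.uint8)  # dummy frame

# For each destination column, pick the nearest source column (dst_w/src_w = 1.5)
src_cols = np.arange(dst_w) * src_w // dst_w
stretched = frame[:, src_cols, :]

print(frame.shape, "->", stretched.shape)   # (1080, 1280, 3) -> (1080, 1920, 3)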

For details:

http://digitalcontentproducer.com/hdhdv/depth/know_your_formats_0414/index.html

Monday, August 25, 2008

Sample post

A sample post from Diego as well.

Digital TV Formats

High-Definition Television (HDTV) is a digital television system with greater resolution than analog television systems (NTSC, PAL-SECAM) and standard-definition television (SDTV). Popular HDTV and SDTV formats are the following:



The above three formats are also available in different frame rates as follows - 24fps, 25fps, 50fps and 60fps.

Welcome note from Brewer Diego

Welcome, coffee lovers who also have a passion for new tech ideas! This blog was created to serve as a brewing point for new as well as old tech ideas. Share, learn...but don't forget to drink that cup of coffee!