Monday, December 27, 2010

Packet and frame difference

A packet is the PDU (protocol data unit) at layer 3, the network layer, of the OSI networking model. You may have heard packets referred to as IP packets. This is how your data is organized at layer 3.

A frame is the PDU of layer 2 (the data link layer) of the OSI model.
At layer 2, packets are encapsulated into frames so that they can be carried over different media to the end destination. It is still the same data; it may be split up differently because different media have different maximum transmission unit (MTU) sizes, and a few additional details are added in the frame header.
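As a rough illustration of that layering (not any real protocol's exact header layout), here is a small Python sketch that wraps a payload in a toy "packet" with address and length fields, then wraps that packet in a toy "frame" with its own header and a checksum trailer; all field names and sizes are invented for the example.

import struct
import zlib

def build_packet(src: int, dst: int, payload: bytes) -> bytes:
    """Toy layer-3 'packet': 4-byte source address, 4-byte destination address, 2-byte length, then data."""
    header = struct.pack("!IIH", src, dst, len(payload))
    return header + payload

def build_frame(packet: bytes) -> bytes:
    """Toy layer-2 'frame': 6-byte destination/source 'MAC' fields, the packet, then a CRC32 trailer."""
    src_mac = bytes.fromhex("aabbccddeeff")
    dst_mac = bytes.fromhex("112233445566")
    header = dst_mac + src_mac
    trailer = struct.pack("!I", zlib.crc32(header + packet))
    return header + packet + trailer

packet = build_packet(0x0A000001, 0x0A000002, b"hello, layer 3")
frame = build_frame(packet)      # same data, now with layer-2 framing around it
print(len(packet), len(frame))   # the frame is the packet plus header and trailer bytes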


A packet is a formatted block of data carried by a packet-mode computer network. Communication links that do not support packets, such as traditional point-to-point telecommunications links, simply transmit data as a stream of bytes, characters, or bits. When data is formatted into packets, the bitrate of the communication medium can be shared among users more effectively than if the network were circuit switched. On the other hand, with packet switching it is harder to guarantee a minimum bitrate.

In computer networking, a frame is a data packet of fixed or variable length which has been encoded by a data link layer communications protocol for digital transmission over a node-to-node link. Each frame consists of a header (carrying frame synchronization and perhaps bit synchronization), a payload (the useful information, often a packet from a higher protocol layer), and a trailer. Examples are Ethernet frames and Point-to-Point Protocol (PPP) frames.


A packet is a group of bits of data. A frame is a place to put a packet. You can't store a frame; it is mainly of interest to low-level network hardware. A frame is typically an interval of time. Sometimes, it is the equivalent region along the length of a wire.

If you look at a very early use of the term "frame," this is clear. In the RS-232 asynchronous serial line protocol, a packet (though not called that) is typically the 8 bits that represent a byte of data being sent from one node to another. But on the wire, you have to add a start bit in front and a stop bit in back. Adding those bits is called "framing." You will never see the 10-bit sequence stored anywhere; the bits are added on the fly by hardware. The start and stop bits are not part of any packet. The frame is the time between when the receiver sees the start bit and when it sees the stop bit, not the 10 bits themselves.
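A tiny Python sketch of that idea (an illustration of 8-N-1 framing in software, not how real UART hardware is built): it surrounds the 8 data bits with a start bit (0) and a stop bit (1), which is exactly the on-the-fly addition described above.

def frame_byte(data: int) -> list[int]:
    """Frame one byte as 8-N-1: start bit (0), 8 data bits LSB first, stop bit (1)."""
    assert 0 <= data <= 0xFF
    data_bits = [(data >> i) & 1 for i in range(8)]  # RS-232 sends the least significant bit first
    return [0] + data_bits + [1]

def deframe(bits: list[int]) -> int:
    """Recover the byte from a 10-bit frame, checking the start and stop bits."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))   # 10 bits on the wire for 8 bits of data
assert deframe(frame) == ord("A")
print(frame)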

In T1, a frame is defined almost entirely by points in time: twenty-four 8-bit channel timeslots follow one another in a fixed order, with only a single framing bit marking where each frame begins. It is not uncommon on T1 voice lines for the data to end up distributed incorrectly among the frames (instead of one sample per channel per frame), causing individual voice circuits to get mixed up.
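For concreteness, the standard T1/DS1 numbers work out as in this small arithmetic check (just arithmetic, not protocol code):

# T1 (DS1) frame arithmetic: 24 channels of 8 bits plus 1 framing bit per frame,
# sent 8000 times per second.
channels = 24
bits_per_channel = 8
framing_bits = 1
frames_per_second = 8000

bits_per_frame = channels * bits_per_channel + framing_bits   # 193 bits
line_rate = bits_per_frame * frames_per_second                # 1,544,000 bit/s = 1.544 Mbps
per_channel_rate = bits_per_channel * frames_per_second       # 64,000 bit/s per voice channel

print(bits_per_frame, line_rate, per_channel_rate)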

Wednesday, November 17, 2010

Overview of MPEG-4 encoding format

Overview of MPEG-4 encoding format:
• MPEG-4 is a standard used to compress audio and visual data. It is generally used for streaming media, CD distribution, video conversation, and broadcast television. MPEG-4 incorporates many features of MPEG-1, MPEG-2, and other related standards.
• MPEG-4 is still a developing standard and is divided into several parts. The standard includes the concepts of “profiles” and “levels,” allowing a specific set of capabilities to be defined in a manner appropriate for a subset of applications.
• MPEG-4 is able to crunch massive video files into pieces small enough to send over mobile networks. While the resulting blurry pictures are unlikely to persuade millions of people to upgrade their mobile phones immediately, the format holds enough promise for the future. (A minimal encode sketch follows this list.)
• Perhaps more important are the interactive features that MPEG-4 offers. The video functions almost like a Web page, allowing people to interact with the picture on the screen or to manipulate individual elements in real time.
• MPEG-4 also allows other types of content, such as video or images, to be bundled into a single file. These files require special software to play.
• MPEG-4 allows interactivity in the video, which may open the potential to do far more than just point and click at links on the screen. Individual elements of the video, like a character, a ball in a sporting event, or a rocket ship in a science-fiction epic, can exist in a separate layer from the rest of the video. This could allow viewers to interact with these elements, even changing the direction of the story.
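As a practical aside, a common way to produce an MPEG-4 file today is with the ffmpeg tool; the sketch below shells out to it from Python using its MPEG-4 Part 2 video encoder and AAC audio. It assumes ffmpeg is installed, and "input.avi"/"output.mp4" and the quality setting are placeholders for this example.

import subprocess

# Transcode a source clip to an MPEG-4 file: MPEG-4 Part 2 video plus AAC audio
# in an .mp4 container. "input.avi" and "output.mp4" are placeholder names.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.avi",     # source file
        "-c:v", "mpeg4",       # MPEG-4 Part 2 video encoder
        "-q:v", "5",           # quality scale (lower means better quality, larger file)
        "-c:a", "aac",         # AAC audio
        "output.mp4",
    ],
    check=True,
)
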
Overview of H.264 Coding Format:
• Also known as MPEG-4 Part 10 or AVC (for Advanced Video Coding).
• Poised to become the convergence format for the digital video industry, regardless of the playback platform. Big Internet players like Google/YouTube, Adobe, and Apple iTunes are all backing this cross-platform format.
• The H.264 standard is maintained jointly by the ITU-T and ISO/IEC MPEG, so that the H.264 and MPEG-4 Part 10 (AVC) specifications have identical technical content.
• The intention behind the H.264/AVC project was to provide good video quality at substantially lower bit rates than previous standards. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems.
• H.264/AVC/MPEG-4 Part 10 supports multi-picture inter-picture prediction, including the use of previously encoded pictures as references in a much more flexible way than in past standards, allowing up to 16 reference frames (or 32 reference fields in interlaced encoding) to be used in some cases.
• H.264 provides quarter-pixel precision for motion compensation, enabling very precise description of the displacements of moving areas. Since chroma resolution is typically halved both vertically and horizontally, chroma motion compensation uses one-eighth-pixel grid units.
• H.264 uses six-tap filtering to derive half-pel luma sample predictions, reducing aliasing and ultimately providing sharper images (see the interpolation sketch after this list).
• H.264 provides flexible coding features for interlaced-scan video, including macroblock-adaptive frame-field (MBAFF) coding, which uses a macroblock pair structure for pictures coded as frames and allows 16×16 macroblocks in field mode (compared with 16×8 half-macroblocks in MPEG-2), and picture-adaptive frame-field coding, which allows a freely selected mixture of pictures coded as MBAFF frames and pictures coded as individual single fields (half frames) of interlaced video. There is also an enhanced lossless macroblock representation mode that allows perfect representation of specific regions while ordinarily using substantially fewer bits than PCM mode.
• H.264 is attractive for network delivery of video and for delivery of high-definition (HD) video.
• H.264, or AVC, is an open format with a published specification and is available for anyone to implement.
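To make the six-tap half-pel point concrete, here is a small Python sketch of H.264-style half-sample luma interpolation along one row, using the (1, -5, 20, 20, -5, 1) tap weights with rounding and clipping; it ignores picture borders and the further quarter-pel averaging step for brevity.

def half_pel_row(pixels: list[int]) -> list[int]:
    """Interpolate half-sample positions between full samples along one row
    using the 6-tap filter (1, -5, 20, 20, -5, 1) / 32 with rounding and clipping."""
    taps = (1, -5, 20, 20, -5, 1)
    out = []
    # The half-pel sample between pixels[i+2] and pixels[i+3] uses the six
    # surrounding full-pel samples pixels[i] .. pixels[i+5].
    for i in range(len(pixels) - 5):
        acc = sum(t * p for t, p in zip(taps, pixels[i:i + 6]))
        out.append(min(255, max(0, (acc + 16) >> 5)))
    return out

row = [10, 12, 40, 200, 210, 205, 90, 30]
print(half_pel_row(row))  # half-pel values between the 3rd/4th, 4th/5th, and 5th/6th samples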

Tuesday, June 1, 2010

Mark Twain...

Work like you don’t need the money.
Love like you’ve never been hurt.
Sing like no-one’s listening.
Dance like no-one’s watching.
Live like there’s no tomorrow.

-- Mark Twain

Really inspiring...

Trying my best to follow his magnificent lines :D :D

Tuesday, March 2, 2010

Ji Bhar Ke Roye

Only after crying my heart out did I find any peace;
who in this age has ever found true love?
Life keeps passing through one trial after another:
one wound had not yet healed when I found the next one waiting.

Thursday, February 4, 2010

AS QUESTIONS

URLLoader and URLRequest
how to communicate between an AS class, an ASP page, and a PHP page on an online server
MVC framework
proxy and prototype classes and their role in MVC architecture
resizing nested MovieClips using code
getting the name of a child MovieClip through the outer MovieClip
the contentLoaderInfo object
singleton
observer class (a small sketch of these two patterns follows this list)
how many classes can be in one AS file
how many classes should be in one package
how to call the library objects of a loaded SWF from another AS file
how many types of classes are there related to sound
use of the SoundMixer class
how to implement SCORM standards in an XML application
dispatching events
the XMLSocket class
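These topics are mostly ActionScript-specific, but the singleton and observer items are general design patterns; here is a compact, language-agnostic sketch of both in Python (an illustration of the patterns themselves, not of any Flash API).

class Config:
    """Singleton: only one shared instance is ever created."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.values = {}
        return cls._instance

class Subject:
    """Observer: listeners register callbacks and are notified on dispatch."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, callback):
        self._listeners.append(callback)

    def dispatch(self, event):
        for callback in self._listeners:
            callback(event)

assert Config() is Config()          # both calls return the same instance
subject = Subject()
subject.add_listener(lambda e: print("got event:", e))
subject.dispatch("loaded")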

Monday, January 18, 2010

Two-way data binding-- By Eugene Kardash

Two-way data binding, aka bidirectional data binding, refers to two components acting as the source object for the destination properties of each other. That is, if you have a variable defined and, say, an input field whose text property is bound to that variable, then when you change the field's text the variable changes, and when you change the variable, the field's text property changes. In Flex 3 this is possible using a combination of curly braces and an <mx:Binding> statement. Below is a simple example illustrating this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" minWidth="1024" minHeight="768">

    <mx:String id="myName"/>
    <mx:Binding source="nameInput.text" destination="myName" />

    <mx:TextInput id="nameInput" x="30" y="28"/>
    <mx:Label x="31" y="68" text="{myName}"/>

</mx:Application>


In Flex 4 this can be achieved much more easily, inline, using the declaration @{bindable_property}. Code example:


<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/halo" minWidth="1024" minHeight="768">
    <fx:Declarations>
        <fx:String id="myName"/>
    </fx:Declarations>

    <s:VGroup width="200" height="67" x="37" y="38">
        <s:TextInput id="nameInput" text="@{myName}"/>
        <mx:Spacer height="10"/>
        <s:SimpleText text="{myName}"/>
    </s:VGroup>

</s:Application>

Monday, January 4, 2010

Rahat Indori Sahab

Sunlight, the sea, a face:
how deep this scene is.

I have wandered the whole city;
it is quite the wilderness.

Look through my eyes:
that face is everywhere.

The wheat is all poisoned,
though the field looks golden.

It is not only weeping that is confined;
even laughter is under guard.

In the notebook of my existence,
every verse has come up empty.

I have sifted river after river;
the shore is the deepest of all.

The two have been friends for years:
I am mute, he is deaf.

Saturday, January 2, 2010

LETTER

A Desi chap was deeply in love with a pretty girl whom he wanted to marry, but he did not have the courage to talk to her in person.

So he decided to go it alone and, with the help of a dictionary, wrote her a letter of proposal.


HE WROTE :

Most worthy of your estimation, after a long consideration and much meditation, I have a strong indication to become your relation.


As to my educational qualification, it is no exaggeration or fabrication, that I have passed my matriculation examination (no doubt without any hesitation and very little preparation).

What do you say to the solemnization of our marriage celebration according to the glorification of modern civilization and with a view to the expansion of the population of the present generation? On your approbation of the application,

I shall make preparation to improve my situation, and if such obligation is worthy of consideration it will be our argumentation of the joy and exaltation of our joint dissimilation.

Thanking you in anticipation and with devotion, I remain a victim of your fascination.

SHE ANSWERED :

Dear Mr. Victim of my fascination,

Congratulation for your lengthy narration, of course full of affection, aimed at an affiliation for a combination which, on examination, I find is a fine presentation of your ambition.

You have passed your matriculation with little preparation; what about my graduation after a long botheration? So improve your situation in education and make an application by acquisition of post graduation and minimum qualification for the convocation, and before taking your photo for circulation, undergo beautification.

Further strict observation of the following conditions is the regulation for the determination of our relation.

1. Consultation of my parents before approaching for my connection.

2. Communication of your confirmation that you are not a victim of any fascination and,

3. Procreation must not be your recreation.

In anticipation of a solid action instead of continuation of paper conversation.

I Remain, unaffected by your affection.