Monday, 3 December 2007


fring™ is a free mobile VoIP application that utilizes free WiFi or your mobile internet data plan to make free mobile internet calls and live chat (IM) to other ‘fringsters’ and PC-based services including Skype®, Google Talk™, ICQ, MSN® Messenger and Twitter.

fring offers benefits previously enjoyed only on a PC and empowers you to truly go mobile. You can make cheap international and local mobile internet calls, see who’s online before dialing with contact availability icons, live chat instead of SMS, chat in multiple conversations simultaneously, view conversation history and more! Plus, you can take all your fring, Skype, Google Talk, ICQ, MSN Messenger and Twitter buddies mobile and view them alongside your regular phone contacts, from one integrated and searchable contact list.

As fring automatically roams between WiFi and 3G networks, you can effortlessly log in to recognized WiFi hotspots, smoothly gaining access to the best available network for optimal call quality and savings wherever you are.

fring bypasses traditional mobile voice and SMS text messaging services by utilizing the mobile handset’s native internet connection. fring does not require any dedicated hardware or airtime and works with phones purchased through any mobile operator.

Thursday, 22 November 2007

Verizon chooses Nortel for new European optical network

LONDON -- Nortel will deploy a new ultra long haul optical network across Europe to enable Verizon to meet the increasing network demands of its service provider and large enterprise customers and deliver high-bandwidth services like video and online gaming.

The multi-million dollar deal includes a Nortel Metro Ethernet Networking solution that will support Verizon’s planned introduction of 10G services and the emerging 40G services that may be introduced in the future.

“Verizon has taken a bold, forward-looking approach to building out its network,” said Philippe Morin, president, Metro Ethernet Networks, Nortel. “It provides the ability to deliver the bandwidth capacity that service providers and businesses require today and the critical ability to evolve seamlessly to 40G when needed.”

“Of course the true beneficiaries are the end users who will gain unrestricted access to new, high-bandwidth services and applications, such as video, advanced business services and multimedia communications. This development also supports the coming megatrend of Hyperconnectivity, where every device that should be connected to the network, will be connected.”

The new network will carry more than 80% of Verizon’s European network traffic and will be deployed across multiple countries, including the United Kingdom, France, Belgium, Germany and the Netherlands.

In addition to providing increased bandwidth, the new next-generation, ultra-long haul optical network allows Verizon to reduce costs as it simplifies network operations and requires less equipment.

Nortel is the sole supplier of the Adaptive, All Optical Solution and Nortel Global Services is assisting with the deployment to provide a turnkey solution for Verizon that will include integration into existing management systems and Network Operation Center facilities.

The new Verizon optical network is based on the Optical Multiservice Edge 6500, a next-generation optical convergence platform. The solution also features Nortel's unique electronic Dynamically Compensating Optics (eDCO), which simplifies networking by extending 40G wavelengths over thousands of kilometers without requiring dispersion compensation modules, greatly simplifying the network.

Verizon is also using the Common Photonic Layer (CPL) which will enable migration to a more agile, adaptive, all optical intelligent network. Nortel’s Reconfigurable Optical Add/Drop Multiplexing (ROADM) technology is also included in the solution to make it easy to add and route new services, resulting in a more cost-effective, reliable infrastructure. In addition, the Nortel Optical Network Manager will provide the required operations, administration and management for the new network.

Sunday, 18 November 2007

Pitch Spelling Algorithms

Pitch spelling is the process of assigning pitch names that are consistent with the key context to numeric representations of pitch, such as MIDI note numbers or pitch classes.

The authors introduce two new notions of approximate matching with applications in computer-assisted music analysis, and present algorithms for each notion of approximation: one for approximate string matching and one for computing approximate squares.

-Chew and Chen
Their algorithm is based on the spiral array model.
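As a toy illustration of the task (not Chew and Chen's spiral array method), the sketch below spells MIDI note numbers with sharps or flats depending on an assumed key context; the key table and naming scheme are deliberate simplifications:

```python
# Minimal pitch-spelling sketch: map MIDI note numbers to letter names,
# choosing sharps or flats according to a given key context.
# This is NOT the spiral array algorithm, just a naive baseline.

SHARP_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
FLAT_NAMES  = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Major keys whose signatures use flats (simplified, illustrative table)
FLAT_KEYS = {"F", "Bb", "Eb", "Ab", "Db", "Gb"}

def spell_pitch(midi_note, key="C"):
    """Return a pitch name (e.g. 'F#4') for a MIDI note number,
    preferring flats in flat keys and sharps otherwise."""
    pc = midi_note % 12            # pitch class 0-11
    octave = midi_note // 12 - 1   # MIDI convention: note 60 -> C4
    names = FLAT_NAMES if key in FLAT_KEYS else SHARP_NAMES
    return f"{names[pc]}{octave}"

print(spell_pitch(66, key="G"))   # F#4 in a sharp key
print(spell_pitch(66, key="Eb"))  # Gb4 in a flat key
```

A real pitch speller would also track modulations and local chord context rather than a single fixed key.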

Sunday, 11 November 2007

Introduction to Acquisition Process

1. Serial search
-The algorithm tests the cells one by one.
-Each cell corresponds to a candidate code-phase and Doppler of the acquired signal within a given resolution.
2. Parallel search
In this architecture the test statistics are generated over the entire search space in parallel.
3. Hybrid search
With the hybrid search strategy the whole search space is divided into a number of test regions, each consisting of a few cells.
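The serial strategy above can be sketched as a loop over code-phase cells; the signal model, the threshold value, and the omission of the Doppler dimension are illustrative simplifications:

```python
import numpy as np

# Toy serial-search acquisition: test code-phase cells one by one against
# a threshold. A real receiver would also sweep Doppler bins; the code,
# noise level, and threshold here are made-up assumptions.

rng = np.random.default_rng(0)
N = 64
code = rng.choice([-1.0, 1.0], size=N)          # pseudo-random spreading code
true_phase = 23
received = np.roll(code, true_phase) + 0.2 * rng.standard_normal(N)

def serial_search(received, code, threshold):
    """Test each code-phase cell in turn; return the first cell whose
    normalized correlation exceeds the threshold, else None."""
    for phase in range(len(code)):
        stat = abs(np.dot(received, np.roll(code, phase))) / len(code)
        if stat > threshold:
            return phase
    return None

print(serial_search(received, code, threshold=0.6))  # 23 (the true phase)
```

A parallel search would instead compute the statistic for every cell at once (e.g. via an FFT-based correlation) before deciding.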

Introduction to Fourier Optics

Aperture effect
When an incident wave passes through an aperture, the observed field is the combination of diffracted contributions from every point of the aperture. If a distant object is viewed through a small circular aperture placed close to the eye, there is an apparent "shadowing" of the center of the viewed image, while the inner perimeter of the aperture is bright.

What is Fourier Optics? (quoted from )
Fourier optics describes the propagation of light using Fourier analysis. It can be used to describe Fresnel and Fraunhofer diffraction. The Fraunhofer diffraction pattern is the Fourier transform of the diffracting object.
Fresnel diffraction describes the diffracted light field at a distance from the object that is large compared to the wavelength of light, but not so large that the curvature of the wavefront can be neglected. Fraunhofer diffraction describes the diffracted light field at a distance so large that the curvature of the wavefront can be neglected.
Possible uses are:
• Describing image formation
• Modelling the aberrations of an optical system
• Studying the performance of a lens
• Modelling diffraction patterns and light propagation
• Optical signal processing
• Optical computing
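The statement that the Fraunhofer pattern is the Fourier transform of the diffracting object can be demonstrated numerically; the grid size and aperture radius below are arbitrary illustrative choices:

```python
import numpy as np

# Fraunhofer diffraction sketch: the far-field intensity pattern is the
# squared magnitude of the Fourier transform of the aperture function.

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 20**2).astype(float)   # circular aperture, radius 20

far_field = np.fft.fftshift(np.fft.fft2(aperture))  # shift DC to the center
intensity = np.abs(far_field) ** 2

# The central (zero-frequency) peak dominates, as in an Airy pattern.
center = intensity[N // 2, N // 2]
print(center == intensity.max())  # True
```

For a circular aperture this intensity is the familiar Airy pattern: a bright central disc surrounded by progressively fainter rings.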

Tuesday, 9 October 2007

Introduction to trust in CRN

quoted from BWN lab

CRN: Cognitive Radio Network
Cognitive Radio:
1. It knows the current degree of needs and future likelihood of needs of its users.
2. Learns and recognizes usage patterns from users.

Cognitive Radio Capability
1. Reconfigurable
2. Cognitive

1. Primary system (including licensed band and unlicensed band)
2. Cognitive radio system

The Definition of Trust
1. Trust of a party X in a party Y for a specific service S is the measurable belief of X that Y behaves dependably for a specified period.
2. Trust can be seen as a directional relationship between trustor and trustee.

Trust Categorization
1. Access trust
2. Data trust
3. Operations trust
4. Communication trust

Some properties of Trust
1. Transitivity
2. Asymmetry
3. Dynamism (trust changes over time)
4. Multi-level
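These properties can be illustrated with a toy directed-trust model; the hop-wise multiplication rule for transitive trust is an assumption chosen for illustration, not a standard from the CRN literature:

```python
# Toy directed-trust model illustrating asymmetry and transitivity.
# Parties and values are made up; trust is a directed relation, so
# trust(X, Y) need not equal trust(Y, X).

trust = {("X", "Y"): 0.9, ("Y", "X"): 0.4, ("Y", "Z"): 0.8}

def direct_trust(a, b):
    """Trustor a's direct trust in trustee b (0.0 if unknown)."""
    return trust.get((a, b), 0.0)

def transitive_trust(a, b, via):
    """Derived trust through an intermediary, discounted at each hop
    (illustrative rule: multiply the per-hop trust values)."""
    return direct_trust(a, via) * direct_trust(via, b)

print(direct_trust("X", "Y"))                      # 0.9
print(direct_trust("Y", "X"))                      # 0.4 (asymmetric)
print(round(transitive_trust("X", "Z", "Y"), 2))   # 0.72
```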

CRN will consist of multiple service providers, access network operators, and different kinds of terminals.

Thursday, 4 October 2007

Firefox and Jajah

The JAJAH extension for Firefox integrates call functionality into your browser. Phone numbers on web pages are automatically detected and highlighted. When clicked, JAJAH initiates a phone call from your phone - landline or mobile - to the desired destination. Alternatively, phone numbers can be entered directly in the toolbar, thus effectively combining phone communication with everyday web browsing.

Regardless of your phone company or plan, JAJAH allows long distance and international calls for less than 2 cents a minute - no monthly fee, no registration fee, no need for prepayment. Trial users receive 5 free minutes to experience the outstanding quality and simplicity of the JAJAH service.

For more info please visit

Tuesday, 4 September 2007

Optical Network

Although wireless communication is more and more popular and technologies that support high mobility and high data rates have been implemented, such as WiMAX and 3.5G HSDPA, wired networks remain the backbone of the Internet. A wired network such as an optical network offers tremendous bandwidth and high data rates, with no inter-symbol interference (ISI) and no inter-channel interference (ICI), while wireless communication is more convenient and can be set up on portable devices.

Thursday, 2 August 2007

Media Gateway Control Protocol

In computing, Media Gateway Control Protocol (MGCP) is a protocol used within a distributed Voice over IP system.

MGCP is defined in RFC 3435, which obsoletes an earlier definition in RFC 2705. It superseded the Simple Gateway Control Protocol (SGCP).

Another protocol for the same purpose is Megaco, a co-production of the IETF (RFC 3525) and ITU (Recommendation H.248.1). Both protocols follow the guidelines of the Media Gateway Control Protocol Architecture and Requirements, RFC 2805.


The distributed system is composed of a Call Agent (or Media Gateway Controller), at least one Media Gateway (MG) that performs the conversion of media signals between circuits and packets, and at least one Signaling Gateway (SG) when connected to the PSTN.
The Call Agent uses MGCP to tell the Media Gateway:
- what events should be reported to the Call Agent
- how endpoints should be connected together
- what signals should be played on endpoints.
MGCP also allows the Call Agent to audit the current state of endpoints on a Media Gateway.
The Media Gateway uses MGCP to report events (such as off-hook, or dialed digits) to the Call Agent.

(While any Signaling Gateway is usually on the same physical switch as a Media Gateway, this needn't be so. The Call Agent does not use MGCP to control the Signaling Gateway; rather, SIGTRAN protocols are used to backhaul signaling between the Signaling Gateway and Call Agent).

In MGCP, every command has a transaction ID and receives a response.
Typically, a Media Gateway is configured with a list of Call Agents from which it may accept programming (where that list normally comprises only one or two Call Agents). In principle, event notifications may be sent to different Call Agents for each endpoint on the gateway (as programmed by the Call Agents, by setting the NotifiedEntity parameter). In practice however, it is usually desirable that at any given moment all endpoints on a gateway should be controlled by the same Call Agent; other Call Agents are available only to provide redundancy in the event that the primary Call Agent fails, or loses contact with the Media Gateway. In the event of such a failure it is the backup Call Agent's responsibility to reprogram the MG so that the gateway comes under the control of the backup Call Agent. Care is needed in such cases; two Call Agents may know that they have lost contact with one another, but this does not guarantee that they are not both attempting to control the same gateway. The ability to audit the gateway to determine which Call Agent is currently controlling can be used to resolve such conflicts.
MGCP assumes that the multiple Call Agents will maintain knowledge of device state among themselves (presumably with an unspecified protocol) or rebuild it if necessary (in the face of catastrophic failure). Its failover features take into account both planned and unplanned outages.

Protocol Overview

MGCP packets are unlike those of many other protocols. Usually carried over UDP port 2427, MGCP datagrams are formatted as whitespace-delimited plain text, much like what you would expect to find in many TCP-based protocols. An MGCP packet is either a command or a response.
Commands begin with a four-letter verb. Responses begin with a three-digit response code.
There are eight (8) command verbs: AUEP, AUCX, CRCX, DLCX, MDCX, NTFY, RQNT, RSIP
Two verbs are used by a Call Agent to query (the state of) a Media Gateway:
AUEP - Audit Endpoint
AUCX - Audit Connection

Three verbs are used by a Call Agent to manage an RTP connection on a Media Gateway (a Media Gateway can also send a DLCX when it needs to delete a connection for its self-management):
CRCX - Create Connection
DLCX - Delete Connection
MDCX - Modify Connection

One verb is used by a Call Agent to request notification of events on the Media Gateway, and to request a Media Gateway to apply signals:
RQNT - Request for Notification

One verb is used by a Media Gateway to indicate to the Call Agent that it has detected an event for which the Call Agent had previously requested notification of (via the RQNT command verb):
NTFY - Notify

One verb is used by a Media Gateway to indicate to the Call Agent that it is in the process of restarting:
RSIP - Restart In Progress
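A minimal sketch of the text format these verbs use, with made-up endpoint, call, and transaction identifiers (see RFC 3435 for the authoritative grammar and the full parameter set):

```python
# Illustrative, heavily simplified MGCP message handling. The endpoint
# name, call ID, and transaction ID below are hypothetical examples.

def build_crcx(transaction_id, endpoint, call_id, mode="sendrecv"):
    """Build a (simplified) CRCX command a Call Agent might send to
    create a connection on a Media Gateway endpoint."""
    return "\r\n".join([
        f"CRCX {transaction_id} {endpoint} MGCP 1.0",
        f"C: {call_id}",    # call ID parameter
        f"M: {mode}",       # connection mode
    ])

def parse_response_code(response):
    """Every MGCP response starts with a three-digit code followed by
    the transaction ID of the command it answers."""
    code, txid, *rest = response.split(None, 2)
    return int(code), int(txid)

cmd = build_crcx(1234, "aaln/1@gw.example.net", "A3C47F2")
print(cmd.splitlines()[0])                 # CRCX 1234 aaln/1@gw.example.net MGCP 1.0
print(parse_response_code("200 1234 OK"))  # (200, 1234)
```

In a real deployment the CRCX would also carry an SDP session description, and the gateway's response would include the connection ID the Call Agent later uses in MDCX and DLCX.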

Wednesday, 18 July 2007

The Inspiration Of Ldpc Codes

The full name of LDPC code is low-density parity-check code, which means that only a few 1's appear in the parity-check matrix. For a linear block code such as an LDPC code, we have the fundamental decoding condition cH^T = 0, where c is a code word and H is the parity-check matrix. Carrying out the matrix multiplication yields equations such as c0 + c1 + c3 = 0 and c3 + c4 + c7 = 0, where ci is the ith coded bit. Therefore, c0 is only influenced by c1 and c3.
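The decoding condition can be checked numerically; the small H and code word below are illustrative examples, not taken from the text:

```python
import numpy as np

# Verify the decoding condition c @ H.T == 0 (mod 2) for a toy
# parity-check matrix. Each row of H is one parity equation.

H = np.array([[1, 1, 0, 1, 0, 0],    # c0 + c1 + c3 = 0
              [0, 0, 1, 1, 1, 0],    # c2 + c3 + c4 = 0
              [1, 0, 0, 0, 1, 1]])   # c0 + c4 + c5 = 0

c = np.array([1, 0, 1, 1, 0, 1])     # candidate code word

syndrome = c @ H.T % 2
print(syndrome)                      # [0 0 0] -> c satisfies every check
```

A nonzero entry in the syndrome would flag the corresponding parity equation as violated, which is exactly the information iterative decoding exploits.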

Gallager proposed LDPC codes in 1962. The inspiration for LDPC codes comes from the weak influence among different coded bits. For any code word, every coded bit can be corrected by two other bits, assuming there are three 1's in each parity-check matrix row. If the intrinsic information (received probability) alone cannot determine a coded bit, we can correct error bits through extrinsic information (propagated probability). As the block size increases, the coded bits become less correlated. Thus, we can treat every coded bit as an independent bit and run an iterative algorithm called belief propagation. Finally, the decoded bits are determined by both intrinsic and extrinsic information.

Sunday, 8 July 2007

Properties of LDPC Codes

1. Properties of LDPC Codes
1.1 Sparse Parity Matrix
A sparse matrix is a matrix populated with few ones in each row and column; more precisely, the ratio of nonzero entries in the matrix is kept low. This is why such codes are called "low-density". In particular, an (n, wc, wr) low-density code is a code of block length n whose parity matrix, like that of Fig. 1, has wc 1's in each column and wr 1's in each row.

Fig. 1 Low-density code matrix: n = 20, wc = 3, wr = 4

1.2 Regular LDPC Codes
A regular LDPC code is a linear block code whose parity matrix H contains exactly wc 1's in each column and wr = wc·n/(n-k) 1's in each row. The code rate is k/n, wc is much smaller than (n-k), and similarly wr is much smaller than n. In [3], MacKay shows that wc = 3 is necessary for good codes.

1.3 Irregular LDPC Codes
If the number of 1's per column or row is not constant, the code is an irregular LDPC code. It’s very easy to determine whether an LDPC code is regular or irregular through its graphical representation. Usually, irregular LDPC codes outperform regular LDPC codes.
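A quick way to tell regular from irregular codes is to compare column and row weights; the toy matrix below is an assumption chosen so that wr = wc·n/(n-k) holds (wc = 2, wr = 4, n = 6, n-k = 3):

```python
import numpy as np

# Decide whether a parity-check matrix is regular: every column must
# have the same weight wc and every row the same weight wr.

def ldpc_regularity(H):
    """Return (is_regular, wc, wr); wc and wr are None if irregular."""
    col_w = H.sum(axis=0)
    row_w = H.sum(axis=1)
    if len(set(col_w.tolist())) == 1 and len(set(row_w.tolist())) == 1:
        return True, int(col_w[0]), int(row_w[0])
    return False, None, None

# Toy regular matrix: wc = 2 per column, wr = 4 per row.
H = np.array([[1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 1, 1]])

print(ldpc_regularity(H))   # (True, 2, 4)
```

Flipping any single entry of H would make some column or row weight differ, and the same check would then classify the code as irregular.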

2. Characteristics of LDPC Codes
In some specific situations, we can get the following characteristics.
2.1 Large dmin
Several facts help LDPC codes achieve this goal. First, any two columns overlap in at most one 1. Second, the sparsity allows us to keep overlap low, and less overlap means higher independence among different coded bits. These conditions give the LDPC decoder good decoding ability and a low bit error rate.

2.2 Low Complexity
The "low density" characteristic applies only to the H matrix (i.e., the parity-check matrix). The H matrix serves as the decision criterion and defines the decoding algorithm. Thus, a lower-density H matrix yields a lower-complexity decoder.

3. Advantages of LDPC Codes
3.1 Near-capacity Performance
Shannon's theory tells us that “long” and “random” codes achieve capacity. LDPC codes provide the solution and attain near-capacity performance. However, it doesn’t mean that an LDPC code is the best code. Different codes are good at different things. For example, Turbo Codes are better at low code rates, i.e., R = 1/2 and below.

We know that irregular LDPC codes have better performance than regular LDPC codes. Therefore, the key to a good LDPC code is how to design it. There are methodical procedures for designing LDPC codes, especially irregular ones. [4] proposes a class of efficiently encodable irregular LDPC codes which might be called extended IRA codes.

Wednesday, 27 June 2007

LDPC Codes History

The history of coding starts with the seminal work of Claude Shannon on the mathematical theory of communication in 1948. He demonstrated that errors induced by a noisy channel can be reduced to any desired level as long as the information rate is less than the capacity of the channel. The theoretical maximum information transfer rate is called the Shannon limit.

In 1962, Gallager proposed low-density parity-check codes in his doctoral dissertation. LDPC codes are defined by a sparse parity-check matrix. They provided near-capacity performance but were difficult to implement. Moreover, concatenated RS and convolutional codes were considered perfectly suitable for error-control coding at the time. Thus, his remarkable thesis was forgotten by coding researchers for almost 20 years. In 1981, Tanner generalized LDPC codes and created the bipartite graph used to represent them, but this work was still ignored by coding theorists.

LDPC codes were noticed again by researchers in the mid-1990s, who began to investigate codes on graphs and iterative decoding. MacKay and others rediscovered the advantages of combining linear block codes generated by sparse matrices with iterative decoding based on belief propagation, and by that time the decoding complexity looked achievable. Since then, many papers have been published and LDPC codes have become popular.

Monday, 25 June 2007

WiMax to rack up '54 million subscribers by 2012'

[Quote from]

WiMax is set to rack up 54 million subscribers worldwide within five years - but only if it can capitalise on emerging markets.

A report from consultancy Senza Fili Consulting - WiMax: Ambitions and Reality - predicts the combined total of fixed, fixed-and-mobile, and mobile WiMax subscribers will amount to 54 million by 2012.

According to the consultancy the best prospects for the tech are tied to the rollout of mobile services - and it predicts that by 2012 61 per cent of subscribers will use WiMax for mobile access.

Monica Paolini, author of the report, said the recent inclusion of WiMax as an IMT-2000 (International Mobile Telecommunications 2000) technology will enable mobile operators to deploy it more widely.

But the mobile market will take longer than fixed to grow because most mobile operators do not yet need a data-only wireless network to complement their 3G networks.

While 3G and wi-fi have managed to co-exist up to now the future impact of WiMax on mobile operators' 3G networks is uncertain.

But according to the report in the next five years WiMax will become a mature technology for mobile broadband access.

Thursday, 7 June 2007

10 things your phone will do in 10 years

Hi Rick, Goker, Jerry, Carl, I am feeling a little embarrassed for having been on the member list for so long without ever posting or even commenting on an article. No excuse, my bad.

As I now work in the Mobile Device Software Dept. of a mobile phone manufacturer, I have gained some deeper experience with, and thoughts about, this industry. I received a really good article through an internal company mailing, titled "10 things your phone will do in 10 years", and I would like to share it with all of you.

The cell phone used to be mainly about making phone calls, but those days are long gone.

The past decade has seen the device evolve into the Swiss Army Knife of consumer electronics. Not only can you take pictures and video with your phone, you can use it to send e-mails, chat on instant messengers, listen to music, get directions, and even watch television.
The technology has come a long way since the days of brick-shaped analog phones that barely fit in a purse, let alone a pocket. Two years ago, experts predicted that there would be 3 billion cell phone subscribers worldwide by 2010. Now it looks as if we'll pass the 3 billion mark by the end of this year.
As wireless-service operators continue to deploy third-generation, or 3G, networks, which support high-bandwidth applications such as video and Internet access, this phenomenal growth is likely to continue. But a big question for consumers is: what will these phones do? CNET talked to industry experts and executives and spent some time gazing into a crystal ball to come up with the following list of 10 things the average cell phone user will be doing with his or her phone in the not-too-distant future.

1. No wallet? No problem
A new technology standard called "near-field communications," or NFC, will turn cell phones into credit or debit cards. A chip is embedded in a phone that allows you to make a payment by using a touch-sensitive interface or by bringing the phone within a few centimeters of an NFC reader. Your credit card account or bank account is charged accordingly.
Unlike RFID (radio frequency identification) technology, which also can be used to make wireless payments, NFC technology allows for two-way communication, making it more secure. For example, an NFC-enabled handset could demand that a password or personal identification number be entered to complete the transaction.
The NFC mobile-payment application is currently in trials in the United States, Germany, Finland, the Netherlands, and a few other countries. The technology is widely used in Japan, where people use their phones to pay for everything from sodas dispensed in vending machines to subway cards. Nokia announced the first fully integrated NFC phone, the Nokia 6131 NFC, at the Consumer Electronics Show in Las Vegas in January, and the company is currently testing the 6131 with AT&T's Cingular Wireless in New York City.
Experts also note that NFC technology can be used for more than just retail transactions. It can be used to get data from an NFC-equipped business card, or to download tickets or other data from an NFC-equipped kiosk or poster.

2. The World Wide Web in your pocket
The promise of the mobile Internet has yet to live up to its hype. Users have had disappointing experiences with HTML Web sites that render poorly on handsets, or they've been forced to use stripped-down wireless application protocol, or WAP, sites that don't provide the same richness that they have come to expect on the wired Web. But as more phones come equipped with full HTML browsers, cell phones will truly become just another device used to access the Internet.
Today many smart phones already provide full HTML browsers. Nokia's latest N-series and E-series phones, which run Opera browsers for the Symbian operating system, are among the most advanced.
In the future, these mobile HTML browsers will make their way onto even the most basic phones. Motorola recently announced it is adding an HTML browser to its popular Razr phones.
So what will full Internet browsing mean for users? For one thing, it could accelerate the growth of mobile social networking. In the last couple of years, social-networking sites such as MySpace and YouTube have become hits. Now people are extending those social networks to their cell phones. In December, ABI Research said that almost 50 million people used social-networking sites on their mobile phones. That number is expected to grow to 174 million by 2011.
Mobile operators such as AT&T and Helio have a special deal with MySpace, and Verizon Wireless has a special deal with YouTube. Mobile phones could allow people to more seamlessly connect their virtual presence with their physical presence. But Charles Golvin, an analyst with Forrester Research, predicts that this fact alone could mean that people will form smaller, more-private social networks with their mobile phones instead of simply using the phones as extensions of the social networks they created using their PCs on sites like MySpace.
"Do you really want everyone on MySpace to be able to track where you are?" he says. "Cell phones are such personal devices, and they go with us everywhere. I think people will be more inclined to communicate among smaller groups who they already know and socialize with."

3. Location, location, location
Due to a Federal Communications Commission mandate that requires operators to locate people when they dial 911 in an emergency, a large number of mobile phones sold in the United States already have integrated GPS (global positioning system) chips. While these chips are used by some mobile operators to pinpoint users' locations when they're in danger, they can also be used to support a variety of location-related services.
The most obvious service is turn-by-turn navigation, which provides directions simply by allowing users to type in a destination. Satellites then locate the GPS-enabled device and map the device's location to the destination. A map can then be generated on the user's screen, along with text-based directions. Some devices will also "read" the directions to the user.
Verizon Wireless and Sprint Nextel already offer navigation services. Verizon charges $9.99 a month for the service and Sprint is offering the service for free when customers buy certain data packages. Handset makers Nokia and Motorola also plan to offer navigation map services. In February at the 3GSM Wireless trade show in Barcelona, Nokia introduced the 6110 Navigator, the company's first navigation-enabled handset designed for the mass market.
But location services will soon go far beyond navigation. GPS technology will also be used to enhance local search engines, so that when you type in the word "pizza" you get a list of local pizza parlors, rather than a list of pizza-related Web sites.
Media conglomerate IAC/InterActiveCorp, which owns more than 60 Internet brands, said recently it will use GPS-enabled search on its mobile Web site to help consumers find friends, shops, and services based on their locations. The application will be available on Sprint's network. IAC plans to add the feature later to some of its other Web sites, such as Ticketmaster and
Mobile virtual network operators Boost, Helio, and Disney Mobile are already offering tracking services that allow people to keep tabs on their kids or find their friends. Many of these services are beginning to come to market now, but by 2010 they should offer better accuracy and will also reach more mainstream users.

4. Search goes mobile
Mobile search will become a standard feature on all handsets over the next three years. Most phones will likely have search built into their main screens, with a search icon prominently featured next to the time and the icons depicting battery and signal strength. Some phones will actually have a search button on the keypad or protruding from the case. In April, Alltel Wireless announced that it would preinstall JumpTap's mobile search button on LG Electronics' LGVX8600 devices.
Helio's new smart phone, the Ocean, has a search feature that allows you to slide out the keyboard, type a keyword, hit Enter and immediately get results from Google, Yahoo, and Wikipedia.
While the big guys--Google and Yahoo--will certainly have a presence on mobile devices, "white label" services, such as one available from JumpTap, will also be popular because they allow carriers to brand the service as their own.

5. TV on the go-go
Mobile TV in all its forms is expected to explode in the next few years. IMS Research forecasts that by 2011 there will be more than 30 million mobile TV subscribers in the United States. The firm also predicts that almost 70 million handsets capable of receiving mobile TV will be shipped in the U.S. in 2011.
Consumers will have access to a wide range of TV possibilities on their phones, from original and professionally produced content to repurposed clips to live broadcasts and user-generated clips.
The mobile handset will become an extension of TV and computer screens at home, allowing consumers to time- and place-shift viewing. Sling Media already offers mobile users the option of viewing programming available on their home TVs on their Windows Mobile devices using a wireless data connection.
Four major cable operators working with Sprint Nextel--Comcast, Time Warner, Cox Communications, and Advance/Newhouse Communications--are also expected to expand some video programming to cell phones. Today they offer features such as remote programming for DVRs.
Over the next three years, broadcast TV networks designed to provide service for mobile devices will also emerge on the scene. Qualcomm's MediaFlo has already signed deals with Verizon Wireless and AT&T, which will use MediaFlo broadcast technology to distribute live TV programming to mobile subscribers. Another broadcast technology, known as DVB-H, will likely find a strong following in Europe.
Experts believe there will be a spike in mobile TV usage in 2008 when the Summer Olympics in Beijing are scheduled to take place. Many operators around the world expect to have their mobile video services up and running to air the games.

6. Simplified surfing
Ever notice how many clicks it takes to find the one thing you're looking for on your phone? It's worse than counting how many licks it takes to get to the center of a Tootsie Roll Pop. But handset makers and mobile operators are hard at work trying to make phones easier to navigate and simpler to use.
The upcoming iPhone from Apple is a perfect example of how user interfaces will be improved. Apple fans are confident that the company has come up with another slick and intuitive design, just as it did for the iPod.
One aspect of the iPhone's interface that has been publicized is its use of sensory technology to detect when the device is rotated. This allows the phone to automatically render pictures on the screen in portrait (vertical) or landscape (horizontal) format. That allows the user to determine which format is best for viewing whatever is on the screen, be it a Web page, video, or photo.
In the future, motion-sensing technology, similar to that used in the Nintendo Wii game console, will also allow people to navigate their cell phone menus or the mobile Internet with a flick of their wrists.
But motion sensing is just one piece of the puzzle. Operators such as Verizon Wireless are redesigning their content menus to reduce the number of clicks users must endure to find what they want. Ryan Hughes, vice president of digital media programming for Verizon Wireless, said he believes that user interfaces will be customizable so that users can decide for themselves which applications will be displayed on their phones most prominently.
Motorola is already offering a customizable interface on the Razr 2, which the company claims will make searching for contacts, accessing applications, and messaging much easier.

7. Brainier radios
Many phones today are equipped with dual radios that let subscribers roam on differently configured cellular networks throughout the world, but in the next few years handset makers will also embed Wi-Fi technology into phones, allowing customers to use the devices in any Wi-Fi network hot spot.
T-Mobile USA has been experimenting with such a service for the past several months in its hometown of Seattle. The HotSpot @Home service, which is expected to launch across the country this summer, uses UMA (unlicensed mobile access) technology to allow phones to seamlessly switch calls between a Wi-Fi connection and a cellular connection, depending on which is available and most cost-effective at any particular moment.
T-Mobile HotSpot @Home costs $20 a month on top of a regular cell phone plan, and it delivers unlimited "voice over Wi-Fi" calls from T-Mobile's more than 8,000 hot spots and through any Wi-Fi access point in a home that is connected to a broadband Internet service.
These dual-mode Wi-Fi and cellular phones will also make it possible for users to use voice over Internet Protocol services like Skype to avoid roaming charges when they are traveling internationally, for example. Skype is already available on PocketPCs and Windows Mobile smart phones.
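The switch-or-stay decision such dual-mode services make can be sketched as a simple cost-based preference rule. This is a toy illustration, not T-Mobile's actual UMA logic; the availability flags and per-minute costs are made-up inputs:

```python
def pick_network(wifi_available, cellular_available,
                 wifi_cost_per_min=0.0, cellular_cost_per_min=0.10):
    """Prefer the cheapest available link, the way a dual-mode
    handset favors Wi-Fi when a usable hot spot is in range."""
    options = []
    if wifi_available:
        options.append(("wifi", wifi_cost_per_min))
    if cellular_available:
        options.append(("cellular", cellular_cost_per_min))
    if not options:
        return None  # no coverage at all
    return min(options, key=lambda option: option[1])[0]
```

With both links up, the rule picks Wi-Fi, matching the cost-driven behavior described above; a real handset would also weigh signal quality and mid-call handover state.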

8. Your very own cell tower
Does your cell phone get bad reception inside your house, but works just fine when you stand on your porch? Mobile operators may soon ask you to help them improve cellular coverage in your home or office with small Wi-Fi-like routers that boost cellular signals.
These routers create what are called femto cells, or small personal cellular sites. And they could help solve a major problem for cellular operators who have trouble covering less-populated regions or have difficulty reaching users indoors.
The femto cell router has a cellular antenna to boost the available cellular signals in a small area. The device is then attached to a broadband connection, and uses voice over IP technology to connect cellular phone calls to the mobile operator's network.
Because cell phones use licensed spectrum, the devices would be tied to a particular carrier's network just like a handset. If a consumer wanted to switch carriers, he'd have to get a new femto cell router.
While no carriers in the U.S. have said they plan to use femto cell technology, several companies are already developing products for it. 3Way Networks, based in the U.K., and Ericsson, based in Sweden, each introduced femto cell devices in February.

9. Picture perfect
One of the most dramatic changes in cell phone technology over the past decade has been the emergence of the camera phone. Today roughly 41 percent of American households own a camera phone. In fact, you'd be hard-pressed to buy a phone today that doesn't have a camera.
By 2010 more than 1 billion mobile phones in the world will ship with an embedded camera, up from the 589 million camera phones that are expected to be sold in 2007, according to market research firm Gartner.
There's little doubt that the technology will improve, with high-end phones easily supporting 8-megapixel cameras. The Nokia N95 already offers a 5-megapixel camera. William Plummer, Nokia's North America vice president of sales and channel management for multimedia, says that in a few years users will likely be able to manipulate their images directly on their handsets, just as they would on a high-end digital camera or PC.
Some camera phones will also let users stream live video to friends, family, co-workers, or anyone else with a video-capable phone. Motorola's Razr 2, due out this summer, will support this feature. And AT&T will offer two-way video sharing as a service later this summer using devices made by LG Electronics.
Phones of the future will also come with multiple cameras that will provide additional functionality, Forrester's Golvin predicts. While one camera may provide high-end imaging for sharing pictures and video, a second, lower-end camera could be used for things like capturing two-dimensional bar codes known as QR codes.
QR code-reading software on camera phones eliminates the need to type in contacts or URLs. In Japan, they're already widely used to store addresses and URLs on the pages of magazines, with the codes pegged to both editorial content and advertisements. The addition of QR codes to business cards is also becoming common, greatly simplifying the task of entering contact information into mobile-phone address books.

10. Mad for mobile music
There's no question that mobile music is hot and will continue to grow in popularity. Mobile phone users around the globe are expected to spend $32.2 billion on music for their handsets by 2010, up from $13.7 billion in 2007, according to Gartner.
This content category includes everything from basic ringtones, "real tones" (uncompressed, digital representations of analog signals), and ring-back tones to more sophisticated full-track downloads. Music in all its incarnations is the second-most popular mobile data service, behind short message service (SMS), in terms of use and revenue.
Over the next couple of years, full-song downloads will drive growth in this category. The entrance of big brands like Apple into the mobile phone market will likely push mobile music to the forefront.
Apple's iPhone, announced in January, has created a kind of hysteria that has not been seen before in the consumer electronics market. The device, which combines Apple's popular iPod music player with a mobile smart phone, will go on sale in late June and will be available exclusively on AT&T's network. Even though critics have already noted some downsides to the iPhone--namely that it will not be 3G capable--it has still managed to raise the bar in terms of what's expected from a music playing phone.
All the major handset manufacturers are poised to offer iPhone competitors. Sony Ericsson has the Walkman series. Motorola has its Rokr, Z8, and MotoQ phones. And Research In Motion, best known for phones that cater to business users, has the new BlackBerry 8300 Curve, which comes with stereo Bluetooth, a true 3.5mm headphone jack, and a microSD expansion slot. All of these phones could be strong competitors, if not iPhone killers.
So what will be new in mobile music by 2010?
Most likely it will be more of the same. The line between phones and music players will increasingly blur. And if network operators, device makers and music studios are smart, there will be easier and more cost-effective ways for people to download their favorite tunes onto their phones.
Verizon Wireless, Helio, and Sprint Nextel already offer over-the-air downloads. But many experts believe that, in order to compete, all major operators will have to offer this convenience. And issues surrounding digital rights management--the use of software that limits the use and transfer of copyright material, including music and video files--will also likely be worked out in the next few years to allow users an easy and legal way to port songs from one device to another.

Tuesday, 5 June 2007


Jajah is a VoIP (Voice over IP) provider, founded by Austrians Roman Scharf and Daniel Mattes in 2005.[1] The Jajah headquarters are located in Mountain View, CA, USA, and Luxembourg. Jajah maintains a development centre in Israel.
Jajah's primary service, Jajah Web, takes an approach called web-activated telephony, using VoIP to connect traditional phones (landline or mobile). Calls are made without downloads or user-installed software, in most cases at rates lower than those of traditional phone companies, or even free of charge.

Jajah Web connects existing traditional landline or mobile phones with calls that are set up via Jajah's website. Callers type their own number and the desired destination number into a web form. The Jajah service first rings the caller; once the caller picks up, the destination number is dialled and the connection is established.
Jajah claims that the service works with any standard web browser. It does not require a broadband connection, nor does the caller need to stay online during the call, but internet access is needed to originate it.
Dial-up internet users without a second phone line must schedule their call a few minutes in the future, to allow time to disconnect from their ISP and free up the phone line.

Jajah Free Global Calling
Jajah launched a service offering free calls globally on 28 June 2006. The service is limited to specified geographic areas, and Jajah has also adopted a Fair Use Policy that limits the number of free Jajah calls.
Calls between registered Jajah users are free of charge for landline and mobile calls within the USA, Canada, China, Singapore, Hong Kong and Thailand, and for landline calls to and within most European countries as well as Argentina, Australia, Israel, Japan, Malaysia, Mexico, New Zealand, Venezuela and Zambia.
A further limitation is that scheduled calls and conference calls cannot be free.
(From Wikipedia)

Friday, 1 June 2007

PAL region

The PAL region is a video game publication territory covering Australia, New Zealand, and most European countries. The majority of games designated as part of the region will not play on NTSC-U/C or NTSC-J region consoles because of regional lockout. While this is the most common arrangement, some Xbox and Xbox 360 games are region-free, since Microsoft's policy is to let publishers decide. Nintendo handhelds are region-free, but its consoles are not.

Games ported to PAL have historically been known for game speed and framerates inferior to their NTSC counterparts. Since the NTSC standard is 60 frames per second but PAL is 50, games were typically slowed down by approximately 16.7% (running at 50 rather than 60 frames per second) in order to avoid timing problems or unfeasible code changes. In addition, PAL's increased resolution was not utilized during conversion, creating a pseudo-letterbox effect with borders at the top and bottom and leaving the graphics slightly squashed due to the incorrect aspect ratio. This was especially prevalent in earlier generations, when 2D graphics were used almost exclusively. Many games with an emphasis on speed, such as the original Sonic the Hedgehog for the Sega Mega Drive, suffered in their PAL incarnations.
Despite the possibility and popularity of 60Hz PAL games, many high-profile games, particularly for the PS2, were released in 50Hz-only versions. Square Enix has long been criticized by PAL gamers for its poor PAL conversions. Final Fantasy X runs in 50Hz mode only, correspondingly slower and bordered; shortcomings that, while prevalent in previous generations, were considered inexcusable at the time of its release. In stark contrast, the Xbox featured a system-wide PAL60 option in the Dashboard, and the overwhelming majority of its PAL games offered 50 and 60Hz modes with no slowdown. Current-generation PAL consoles such as the Xbox 360 and Wii also feature system-wide 60Hz support.
Nintendo's Wii Virtual Console service has been criticised because its PAL games run in 50Hz only, despite the console's ability to run in 60Hz mode.
In recent times, several PAL releases have lacked the standard PAL mode and offered 60Hz only, including Metroid Prime 2 for the Nintendo GameCube and Dead or Alive 4 for the Xbox 360.

From Wikipedia, the free encyclopedia


Sunday, 20 May 2007

A New Video Coding Standard in China

What's AVS?
AVS (“Audio and Video Coding Standard”) is a second-generation source coding standard developed by the AVS workgroup of China. Its main coding tools are:

1. Inter prediction (progressive and interlaced): P frames and B frames
2. Interpolation: AVS supports motion vectors of quarter-pel precision
3. Intra prediction (luma and chroma)
4. Transform coding: coding scheme -> transform -> quantization
5. Entropy coding: Exp-Golomb coding for all syntax elements
6. In-loop deblocking: determine the boundary strength from the coding mode and apply a filter of corresponding strength
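Step 5 names Exp-Golomb coding for all syntax elements. As an illustrative sketch, here is an order-0 exponential-Golomb (ue(v)) encoder and decoder for unsigned values; the actual AVS bitstream syntax is of course more involved:

```python
def ue_encode(k):
    """Order-0 exp-Golomb: write len(bin(k+1))-1 zero bits,
    then the binary representation of k+1."""
    b = bin(k + 1)[2:]              # binary of k+1 without the '0b' prefix
    return "0" * (len(b) - 1) + b

def ue_decode(bits):
    """Decode one codeword from a bit string; returns (value, bits_used)."""
    zeros = 0
    while bits[zeros] == "0":       # count the leading zeros
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2) - 1
    return value, 2 * zeros + 1
```

For example, 0 encodes to "1" and 3 encodes to "00100"; small values get short codes, which suits the skewed distributions of most syntax elements.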

For more details, you can check the following website.

Power Line Communication

1. High transmission rate
2. No extra wiring or re-wiring required

Main drawback: a hostile transmission environment (a frequency-selective channel)

System review:
The backbone of the proposed system is IDMA.

Introduction to IDMA(Interleave-Division Multiple Access)
IDMA is a multiple-access scheme that is very powerful and can approach the Shannon limit. For each user, the input data sequence is first encoded and then permuted. The key principle of IDMA is that the interleavers must be different for different users.
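The user-specific interleaving can be sketched as follows. Deriving each permutation from the user id as a seed is an illustrative choice, not the construction of any particular IDMA design:

```python
import random

def make_interleaver(length, user_id):
    """One pseudorandom permutation per user; the user id seeds the
    generator so every user gets a different interleaver."""
    rng = random.Random(user_id)
    perm = list(range(length))
    rng.shuffle(perm)
    return perm

def interleave(chips, perm):
    """Send chip i to position perm[i]."""
    out = [0] * len(chips)
    for i, p in enumerate(perm):
        out[p] = chips[i]
    return out

def deinterleave(chips, perm):
    """Invert the permutation at the receiver."""
    return [chips[p] for p in perm]
```

The receiver separates users by applying each user's deinterleaver before its iterative detection stage; only the permutation round trip is shown here.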

For more details, please read the following paper.

Monday, 14 May 2007

Multimedia Watermarking

(quoted from Kwangwoon University digital media lab)
Speaker: Bede Liu (Princeton University)
Digital watermark
-Secondary information embedded in media data
-Watermarking and data hiding
-Natural images
-Document: binary images, sketches, maps, Check 21,…
-To render watermarking ineffective
-Legal issues

How to insert watermark
How to extract watermark

Spread spectrum embedding
-Place watermark in perceptually significant spectrum
-Use a random vector as the watermark to avoid artifacts

-Scaling, JPEG compression, dithering, cropping

-Need original unmarked image
-May need to perform registration
-Other attacks

How to embed watermark
-Additive embedding, e.g. spread spectrum
-Forcing a relationship, e.g. odd-even
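Both embedding styles can be sketched in a few lines. This is a toy on plain coefficient lists with made-up parameters (alpha, the seed), not any production watermarking scheme:

```python
import random

def embed_additive(coeffs, alpha=0.05, seed=7):
    """Additive embedding: add a scaled pseudorandom +/-1 pattern
    (the watermark) to the host coefficients."""
    rng = random.Random(seed)
    wmk = [rng.choice((-1, 1)) for _ in coeffs]
    marked = [c + alpha * w for c, w in zip(coeffs, wmk)]
    return marked, wmk

def detect_additive(coeffs, wmk):
    """Correlation detector: a clearly positive score suggests
    the watermark is present."""
    return sum(c * w for c, w in zip(coeffs, wmk)) / len(coeffs)

def embed_parity(coeff, bit):
    """Forcing a relationship: round the coefficient so its parity
    (even for bit 0, odd for bit 1) carries one bit."""
    q = int(round(coeff))
    if q % 2 != bit:
        q += 1
    return q
```

The correlation score for a marked signal exceeds that of an unmarked one by roughly alpha; the parity bit is read back simply as q % 2.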

Data hiding for error concealment
-recovery of blocks lost in transmission
-simple interpolation gives poor image quality

Edge estimation and interpolation
-Edge estimation from surrounding blocks

Error concealment using data hiding
-Needs good neighboring blocks
-Two steps of error concealment: edge estimation, then interpolation

Shuffling for binary images
-Uneven distribution of flippable pixels among blocks
-Random shuffling “equalizes” the distribution

A dilemma
-To prove that a watermark W has been embedded in an image
-Given image -> block DCT (32x32) -> subset X
-Prove the existence of W

Monday, 7 May 2007

Digital Life Style


4C: consumer products, car entertainment, communication, computer network

The difference between consumer electronics and the PC
What’s the difference between how you behave in the living room and in the study?
You will spend a lot of time learning how to use Office, PowerPoint, or Outlook.
You won’t spend even a minute learning how to use the TV remote control.

Therefore, consumers don’t care about the hardware inside, only about the functions and the interface. It should be easy to use.

A failed example: the PMP (Portable Media Player)
You can do other activities while listening to music.
You can download music over the internet in a minute.
But the PMP’s LCD panel is too small for comfortable use.

A successful example: the Digital Photo Frame
Put digital photos on the digital photo frame. It’s a simple concept and very useful.

Key features of a DPF in 2007:
1. JPEG/TIFF/GIF/BMP/raw image decoding
2. MPEG and H.264 video decoding
3. Built-in speaker with audio decoding functions
4. Built-in WiFi / power-line networking with DLNA
5. Direct connection to Flickr, Picasa and Webshots
6. Built-in Bluetooth and digital TV functions

Digital home applications
1. DMA (digital media adapter): streams movies from the computer to the TV
2. IP STB: video on demand
3. IP TV: encodes cable TV signals and transmits them to 3G mobile phones

Key features of digital consumer products
1. Must-have functions
2. Easy to use
3. Low cost
4. Low noise (fanless)
5. Low power (especially for portable devices)
6. Internet connection (nice to have)
7. Wireless (nice to have)

Monday, 30 April 2007


Global Megatrends
1. Internationalization
2. Information networking
3. Distance of science and technology

The greatest engineering achievements of the 20th century
1. Electrification
2. Automobile
3. Airplane
4. Water supply and purification
5. Electronics (the transistor)
6. Radio and TV
7. Agricultural mechanization
8. Computer

First monolithic IC, by R. N. Noyce
First enhancement MOSFET

Evolution of FinFET

Moore’s law

First microprocessor
Itanium microprocessor

Top 30 world markets in year 2030
1. portable data communications
2. PC
3. Mobile phone service
4. CPU
5. Digital contents products
6. Magnetic memory
7. Electronic commerce

Monday, 16 April 2007

Canada's closely-watched tech giant

CBC News

Nortel Networks, the Canadian telecommunications equipment giant, began its corporate life in 1895 making equipment for traditional phone companies in Canada, a few years after Alexander Graham Bell invented the telephone. Originally part of Bell Telephone, it morphed into Northern Telecom and finally Nortel. The company remade itself as an Internet company in the 1990s and was often called the poster boy for companies making the transition into the new economy.

But when the Internet bubble burst in 2000, Nortel went from poster boy to whipping boy.
Major changes began at Nortel when John Roth took office in 1997 as the company's president and chief executive officer. He saw that the marketplace of communications was shifting from telephone technology to the Internet. The trick was figuring out how to speed up the process of getting new products and services into the market so Nortel could keep ahead of the fast-paced Web world. In the past, it often took as long as five years to complete a research and development project.

Under Roth's leadership, Nortel was dramatically restructured. Forums were created where nominated employees from every level gathered to help make the company more in tune with the wireless and optical marketplace. Nortel moved to outsourcing much of its production, resulting in the closure of 18 of the company's 24 plants.
The changes helped put Nortel at the top of its league. By some estimates, Nortel equipment was carrying 75 per cent of the Internet traffic in North America as the 1990s came to a close.
Nortel's growth was in part based on acquisitions. It went on frequent buying sprees, often using its own stock to make acquisitions. In 2000 alone, it bought 11 companies for a total of $19.7 billion US.

And then the bubble burst.
Nortel's best customers – telephone and data carriers – began warning that they would be drastically cutting back on their purchases of specialized Nortel equipment. Nortel's sales plunged by 50 per cent. The value of the companies Nortel had bought collapsed too. In less than a year, firms that Nortel had paid billions for were worth just hundreds of millions.
Following the dramatic downward revision in the company's outlook for 2001, some industry watchers (who used to be cheerleaders for Nortel) began questioning Roth's leadership and credibility, especially since the company had earlier promised three times that it would meet its 2001 financial targets. Irate investors filed numerous lawsuits against the firm.

Roth – named Canada's "Business leader of the year" in 2000 – stepped down as president and CEO in early October 2001, replaced by Frank Dunn. The company announced another 10,000 job cuts and a third-quarter loss of $3.47 billion later that month.

The job cuts continued as Nortel struggled to deal with the unprecedented downturn in its business. By the end of 2001, Nortel had just 45,000 employees – half the workforce it had begun the year with.

The bottom line for 2001 was brutal – a loss of $27.3 billion US.
And 2002 brought more misery, and mere glimmers of hope. In February 2002, the company's chief financial officer, Terry Hungle, resigned following allegations that he broke the company's trading rules in some of his personal stock transactions.
In subsequent months, Nortel warned that business was still not picking up; its long-term debt was downgraded to "junk" status by both Moody's Investors Services and Standard & Poor's; and by October, its shares had plunged to just 69 cents – more than 99 per cent lower than where it had been barely two years earlier.
More job cuts brought the company's work force to 35,000 by the end of 2002, about one-third of the work force Nortel had at the start of 2001.
Then, two years of savage job-cutting started to pay off. In January 2003, Nortel reported better-than-expected results that led some analysts to raise their outlooks.
Nortel said it was on track to report a profit by the middle of the year.
As it turned out, the company managed a $54-million US profit in the first quarter – its first quarterly profit in three years.
Its stock price began to rally, topping $6 by September as it signed billions of dollars in new deals with Verizon Wireless in the U.S. and Orange in France.
Technology companies were again spending – not like they were in 1999 – but at least they were spending.
Nortel announced new deals with mobile phone companies Verizon and Orange. In October 2003, the company posted another quarterly profit, and in January 2004 it announced its first annual profit since 1997.
But in March 2004, Nortel warned that it would delay filing its audited financial statements for 2003 and would likely make more financial restatements, sending the stock plunging. The company then put its chief financial officer and controller on paid leave. The stock sank again.
Both the U.S. Securities and Exchange Commission and the Ontario Securities Commission began investigations in April 2004 of Nortel's earnings restatements.
Then on April 28, 2004, Nortel fired its top executive, Frank Dunn, and the two executives who had been on paid leave, and put four more on paid leave. It said a preliminary review suggests its calculated profit for 2003 will have to be reduced by 50 per cent.
In May 2004, the U.S. Attorney's Office in Dallas launched a criminal probe into Nortel, requesting documents going back to Jan. 1, 2000. The Ontario Public Service Employees Union Pension Trust filed a class-action lawsuit against Nortel a few days later.
Nortel's problems continued when it said on June 2, 2004, that its updated financial results for 2003 and the first quarter of 2004 still weren't ready. It subsequently missed three self-imposed reporting deadlines as it struggled to unravel the accounting mess left by the previous management.
Another criminal investigation into Nortel's accounting practices, this time by the RCMP, began in August 2004. Days later, Nortel cut 3,500 jobs, about 10 per cent of its workforce, and fired seven more people from its finance department over accounting problems. Nortel later announced the job cuts would total 3,250, with 950 of those jobs coming from Canada.
When Nortel finally filed its 2003 financials in January 2005, the revisions lowered the company's initially-stated profit of $732 million US to $434 million US. Its 2004 financials, reported in May 2005, showed that the company actually lost money that year – $51 million US. Revenues fell 3.6 per cent from 2003. CEO Bill Owens said he wasn't happy with the results, but said that Nortel, at last, was "now stable."
Just a month later, the company's president and chief operating officer, Gary Daichendt, and its chief technology officer, Gary Kunis, resigned. Both had been with the company less than three months.
In October 2005, Nortel picked former Motorola executive Mike Zafirovski to succeed Bill Owens as CEO. Several months later, Nortel announced it had put aside $2.5 billion US to settle some class-action lawsuits stemming from the company's 2004 accounting scandal.
In March 2006, Nortel once again announced that its financial filings would be delayed and that it would restate financial results for 2003, 2004 and the first nine months of 2005. It also announced a $2.2-billion US loss for the last quarter of 2005, due mainly to the cost of litigation to settle lawsuits from its shareholders.
In May 2006, Nortel Networks warned investors that its first quarter revenues would be flat or down slightly, and that it would post a slightly higher loss than in the first quarter of 2005. In a conference call with analysts, CEO Zafirovski pledged to "recreate" the "great company" that Nortel once was.
Just weeks later, CEO Zafirovski announced a further restructuring. The changes include the elimination of another 1,100 jobs, the creation of two new "centres of excellence," the conversion of the company's pension plan to one that doesn't guarantee a specific pension benefit, and a trimming of other retirement benefits. The company hopes to save $175 million US a year by 2008.
On Dec. 1, 2006, the company went ahead with a 1-for-10 stock consolidation. Its shares jumped 10-fold in price to over $24, but the number of shares plunged from 4.33 billion to 433 million.
But Nortel's restructuring efforts were not over yet. Despite shedding more than 60,000 jobs in six years, the company announced another 2,900 job cuts in February 2007 — a move that would bring the payroll down to 31,000. Another 1,000 jobs would switch to lower-cost countries like China, India and Mexico.
Even after all those cuts, Nortel is still North America's largest maker of telephone equipment.
But it was still making news for the wrong reasons. In March 2007, the U.S. Securities and Exchange Commission and the Ontario Securities Commission announced legal proceedings against former CEO Frank Dunn and three other former senior executives. The SEC accused the four of civil fraud relating to Nortel's accounting and its restatements. The OSC alleged that Dunn and two others broke securities laws by making "material misstatements" in Nortel’s financial filings that they knew or should have known were "materially misleading."

Audio Watermarking Algorithm for Copyright Protection

(Photo quoted from NeAsia)

Digital watermark technology is now drawing attention as a new method of protecting digital content from unauthorized use.

Critical band
The sensitivity of the ear is not the same at every frequency.

Masking effect
Frequency-domain masking is also called simultaneous masking. If two signals are close in frequency, the louder one makes the weaker one inaudible. For example, conversation at a bus stop can be completely impossible while a loud bus is driving past. The masking phenomenon occurs because any loud sound distorts the absolute threshold of hearing, making quieter, otherwise perceptible sounds inaudible.

(Wikipedia)In telecommunications, direct-sequence spread spectrum (DSSS) is a modulation technique. As with other spread-spectrum technologies, the transmitted signal takes up more bandwidth than the information signal that is being modulated. The name 'spread spectrum' comes from the fact that the carrier signals occur over the full bandwidth (spectrum) of a device's transmitting frequency.
The bit and chip durations are related by Tb = N * Tc, where N is the length of the PN sequence.
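The relation Tb = N * Tc says each data bit lasts N chip periods, i.e. each bit is multiplied by an N-chip PN sequence. A minimal DSSS sketch, using a seeded +/-1 sequence in place of a real LFSR-generated m-sequence or Gold code:

```python
import random

def pn_sequence(n, seed=42):
    """Pseudo-noise chip sequence of +/-1 values (illustrative; real
    systems use LFSR-generated m-sequences or Gold codes)."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def spread(bits, pn):
    """Each +/-1 data bit is multiplied by all N chips: Tb = N * Tc."""
    return [b * c for b in bits for c in pn]

def despread(chips, pn):
    """Correlate each N-chip block against the PN sequence."""
    n = len(pn)
    decisions = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], pn))
        decisions.append(1 if corr > 0 else -1)
    return decisions
```

The spread signal occupies N times the bandwidth of the data; a receiver without the right PN sequence sees only noise-like chips.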

Watermark embedding
(Quoted from MusicTrace) With the aid of the software products of the ContentMark product family, it is possible to embed additional information into audio signals. This additional information is transmitted to the final user hidden in the music in a form that is imperceptible to human hearing. A further characteristic is the fact that embedding of a watermark does not change the format. The final user does not have to purchase special player devices, instead he can still play these titles using conventional equipment.
The additional data are robustly embedded in the audio signal; this means that they cannot be removed by simple means. The objective of developing the audio watermark technique was to ensure that the watermark does not become unusable until intentional or inadvertent disturbances have degraded the audio quality to such an extent that the recorded title no longer has any economic value.
The information to be embedded is transferred in so-called data containers. Several data containers have already been developed; they differ in the volume of data to be transferred, the data rate, and the robustness of the watermark. The two most commonly used data containers transfer 48 bits in 5 or 2.7 seconds. Other data containers can be generated as agreed with the customer.

Watermark extraction
(quoted from the WatEx project) The Watermark Extraction project (WatEx) was established in October 2004 as a Ph.D. research study. Its aim is to automatically retrieve and store paper watermarks in a digital representation, in order to preserve their historical value and to provide better access and distribution with current Information and Computing Technology (ICT). The focus is on digital acquisition and on automatic processing and analysis of the visible paper-based watermark, probing beyond the paper surface data to extract the watermark design and create a digital representation for long-term preservation.

GPS receiver

Galileo system
1. Europe’s own global navigation satellite system
2. It ascertains one’s precise position in space
3. European Union support
(Wikipedia)Galileo is tasked with multiple objectives including the following: to provide a higher precision to all users than is currently available through GPS or GLONASS, to improve availability of positioning services at higher latitudes, and to provide an independent positioning system upon which European nations can rely even in times of war or political disagreement. The current project plan has the system as operational by 2011–12, three or four years later than originally anticipated.

GPS/Galileo signal and channel comparison:

                         GPS     Galileo
  Pilot channel          none    yes
  BOC modulation         none    yes
  Satellite code length  1 ms    4 ms

GPS signal
The Global Positioning System (GPS) is a worldwide radio-navigation system formed from a constellation of 24 satellites and their ground stations. It uses spread spectrum communication.
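A GPS receiver turns measured ranges to satellites into a position fix. As a hedged illustration, here is a 2-D trilateration toy with three fixed anchors; a real GPS solution is 3-D, adds a receiver clock-bias unknown, and works from pseudoranges rather than true distances:

```python
def trilaterate_2d(anchors, dists):
    """Solve for (x, y) given three anchor points and their distances.
    Subtracting the first range equation from the other two cancels
    the quadratic terms, leaving a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21   # assumes the anchors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With anchors at (0,0), (10,0) and (0,10) and distances measured from the point (3,4), the solver recovers (3,4) exactly.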

Design of complex filter
1. A complex filter is a bandpass filter with an asymmetric amplitude response
2. Transconductors and capacitors replace the resistors and inductors

Tuesday, 10 April 2007

GPRS Overview

GPRS functional overview

The General Packet Radio Service (GPRS) is a wireless packet data service that extends the GSM network. It provides an efficient method of transferring data by optimizing the use of network resources. The GPRS radio resource allocator can assign multiple radio channels to a single user in order to reach high user data rates, and one radio channel can be shared by multiple users in order to optimize the radio resources. GPRS thus achieves high spectrum efficiency by sharing timeslots between different users, supporting data rates up to 170 kbit/s and providing very low call set-up times. Additionally, GPRS offers direct Internet Protocol (IP) connectivity in a point-to-point or point-to-multipoint mode and provides packet radio access to external packet data networks (PDNs).
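The "up to 170 kbit/s" figure is the aggregate of all eight GSM timeslots at the highest GPRS coding scheme, CS-4, at about 21.4 kbit/s per timeslot. A quick check of the arithmetic:

```python
# Per-timeslot GPRS data rates by coding scheme, in kbit/s.
CS_RATES_KBITPS = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

def peak_rate_kbitps(timeslots, scheme="CS-4"):
    """Theoretical peak rate: timeslots multiplied by the per-slot rate."""
    return timeslots * CS_RATES_KBITPS[scheme]

print(peak_rate_kbitps(8))  # roughly 171.2, i.e. the "up to 170 kbit/s" figure
```

In practice handsets support far fewer slots (their multislot class) and the air interface rarely sustains CS-4, so real-world throughput is much lower.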

GPRS introduces a minimal impact on the BSS infrastructure and no new physical radio interface. The Nortel Networks GPRS network architecture is implemented on the existing wireless infrastructure with the addition of the following network elements.

BSS side:

-Packet Control Unit Support Node (PCUSN)

Core network side:

-Serving GPRS Support Node (SGSN)
-Gateway GPRS Support Node (GGSN)
-SS7/IP Gateway (SIG)

(Nortel Networks)

Sunday, 8 April 2007

Long-hop transmission in sensor networks

Application of Sensor Networks
1. Military: intelligence, surveillance and reconnaissance
2. Health: monitoring patients and assisting disabled patients
3. Other commercial applications: managing inventory, monitoring product quality and monitoring disaster areas

Sensor networks: small size, low cost, low power.
In some situations, long-hop transmission is better than short-hop transmission.

(Wikipedia)A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants, at different locations. The development of wireless sensor networks was originally motivated by military applications such as battlefield surveillance. However, wireless sensor networks are now used in many civilian application areas, including environment and habitat monitoring, healthcare applications, home automation, and traffic control.
In addition to one or more sensors, each node in a sensor network is typically equipped with a radio transceiver or other wireless communications device, a small microcontroller, and an energy source, usually a battery. The size of a single sensor node can vary from shoebox-sized nodes down to devices the size of a grain of dust. The cost of sensor nodes is similarly variable, ranging from hundreds of dollars to a few cents, depending on the size of the sensor network and the complexity required of individual sensor nodes. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed and bandwidth.
In computer science, wireless sensor networks are an active research area with numerous workshops and conferences arranged each year.

Tuesday, 3 April 2007

Optical Transmission

Optical Fiber
(photo quoted from European Space Agency)
Why optical fiber? Because it is immune to electrical interference, does not radiate signals, uses less duct space than copper or coax, spans longer distances, and now offers very low cost and huge information capacity.

An optical fiber consists of a very thin core (where the light travels) and a larger cladding which keeps the light in the core. A coating on the outside protects the fiber. Because the core is not perfectly circular, the optical pulse gets distorted (giving rise to optical non-linear effects).

Optical transmission

A laser generates optical pulses (pulses of light) controlled by the '1's and '0's of the incoming electrical signal. A light-sensitive component (a photodiode) detects these pulses of light and reconstitutes the original electrical '1's and '0's. Usually optical transmission uses a fiber pair, one fiber for transmit and one for receive, although transmit and receive can also travel on the same fiber. Transmission circuits (such as in a synchronous network) have a fixed size: the pipe exists whether data flows or not, and there is no concept of congestion in a transmission network, since the total size of the pipes coming in adds up exactly to the total size of the pipes going out.

Key differences between Metro and Long Haul networks
Metro Networks carry a large range of services from 1.5M to 10G (DS1, DS3, Optical Ethernet, ESCON, FibreChannel, etc).

They are rapidly changing networks: new nodes are added for new customers, distances between nodes are short, and there are lots of Network Elements, hence the need to keep NE cost to a minimum. Long Haul networks provide big transport pipes (moving towards terabits per fiber pair), a more stable network topology than Metro networks, fewer services than the Metro (34M/45M, 2.5G, 10G, 40G in future, GbE, 10GbE in future) and greater distances between nodes (hundreds of km). Long haul networks can be classified as backbone (many points, average circuit length less than 600 km) and express (circuit length greater than 1000 km).

Transmission requirements:
Optics performance adapted to the distance (cheap optics for the Metro, and optics that can go thousands of km in the Long Haul); flexibility (for instance in terms of traffic add/drop at a node, or size of junction); and best use of fiber (which means hundreds of wavelengths in the Long Haul and tens in the Metro).


How to get the maximum capacity on a link: carry the maximum number of bits per second per signal (TDM) and the maximum number of optical signals sharing a fiber (WDM).
Multiplexing is a way to let several signals share the same medium while giving each signal the illusion of having its own line.

TDM gives a time slot to each signal. This means that the position in time determines which signal it is.

Multiplexing examples: 24 phone calls are multiplexed into a T1 in North America; 30 phone calls are multiplexed into an E1 outside North America.
WDM allows different optical signals (with different bit rates and protocols) to share the same fiber by giving each signal a different frequency, or color.
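The TDM arithmetic behind the T1 and E1 examples can be reproduced directly. This sketch uses the standard textbook framing figures (193-bit T1 frames, 32 E1 timeslots), which are not spelled out in the post itself:

```python
# TDM arithmetic for the two classic primary rates.
# Each voice channel is sampled at 8 kHz with 8 bits per sample (64 kbit/s).

SAMPLE_RATE = 8000   # frames per second
BITS_PER_SAMPLE = 8

def t1_rate():
    # T1: 24 voice channels plus 1 framing bit per frame = 193 bits/frame
    bits_per_frame = 24 * BITS_PER_SAMPLE + 1
    return bits_per_frame * SAMPLE_RATE   # 1,544,000 bit/s

def e1_rate():
    # E1: 32 timeslots (30 voice + framing + signalling), no extra framing bit
    return 32 * BITS_PER_SAMPLE * SAMPLE_RATE   # 2,048,000 bit/s

print(t1_rate(), e1_rate())  # 1544000 2048000
```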

Types of circuit:

Fixed point to point: no bandwidth management/signal allocation flexibility – all signals are multiplexed at one end and demultiplexed at the other end. In synchronous networks (TDM) the mux is called Terminal mux or Line System. In optical networks it is an Optical Mux/Demux which multiplexes different optical signals.

Flexible networks can be a mesh of cross connects or switches, and/or rings of ADMs (Add/Drop Muxes). A cross connect is a piece of equipment with lots of ports: semi-permanent connections between ports are under the control of the cross connect management system (not the end user of the network). Cross connects with high-capacity optical interfaces are called switches. They can be electrical inside (opaque switch) or purely optical (photonic switch).

An ADM allows traffic to get in and out of the main traffic flow.

Architecture for resilience:
A ring provides two ways to connect two points and hence a fast protection mechanism; a mesh provides various levels of protection (versus just protected or not) but is more complex than a ring; a point-to-point system can be protected by sending the signal simultaneously from two transmitters, with the receiver at the other end selecting the better signal.

Optical transmission:
Parameters affecting light transmission: attenuation causes the light pulse to lose intensity (the pulse gets smaller); chromatic dispersion causes the light pulse to broaden.
Attenuation: regenerators and amplifiers control attenuation. A regenerator terminates the optical signal: it converts the signal back to the original '1's and '0's and from those generates a brand-new optical signal. An amplifier gives energy to the optical signal and allows it to go further (a single amplifier amplifies an optical signal made up of several wavelengths).

Amplifiers replace 'mountains' of regenerators, since a regenerator acts on a single channel: at a regenerator site the signal has to be optically demultiplexed so that each channel can be regenerated separately. Amplifiers can be cascaded only up to a certain number, after which regenerators need to be used.
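The span-budget arithmetic behind amplifier placement is simple dB accounting. The launch power, input floor and fiber loss figures below are illustrative assumptions, not figures from this post:

```python
# Simple span budget: how far can an optical signal travel before it
# drops below the receiver (or amplifier) input floor?

LAUNCH_POWER_DBM = 3.0      # power launched into the fiber - assumed
FLOOR_DBM = -28.0           # minimum usable input power - assumed
FIBER_LOSS_DB_PER_KM = 0.2  # typical C-band attenuation - assumed

def max_span_km(launch_dbm=LAUNCH_POWER_DBM, floor_dbm=FLOOR_DBM,
                loss_db_per_km=FIBER_LOSS_DB_PER_KM):
    """Distance at which the launched power decays to the floor."""
    budget_db = launch_dbm - floor_dbm   # 31 dB of loss budget
    return budget_db / loss_db_per_km

print(max_span_km())  # about 155 km per amplified span
```

Cascading amplifiers repeats this budget span after span, until accumulated noise and dispersion force a full regeneration.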
Dispersion: fiber types, Dispersion Compensation Modules (DCM) and the type of transmitter can all be used to control dispersion.
Laying new fiber is expensive, and some networks already have existing standard fiber in place.
A DCM (a length of special fiber) can be inserted in the network to compensate for dispersion.
Laser modulation in the transmitter controls dispersion too. Directly modulated transmitters are cheap, but the laser switching on and off creates heavy dispersion, so these transmitters are suitable only for short distances.

Externally modulated lasers (the laser stays on and an external circuit masks the light to create the pulses) provide better pulses for long distances.
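A back-of-the-envelope sketch of why directly modulated transmitters are limited to short distances: chromatic broadening grows roughly as D × L × Δλ, so a wider effective linewidth shrinks the reachable distance. The fiber dispersion and linewidth figures are illustrative assumptions, and the half-bit-period criterion is a common rule of thumb, not a formula from this post:

```python
# Rough dispersion limit: a pulse broadens by dt = D * L * d_lambda.
# Require the broadening to stay under half a bit period.

def dispersion_limit_km(bit_rate_gbps, d_ps_nm_km, linewidth_nm):
    """Distance at which broadening reaches half the bit period."""
    bit_period_ps = 1000.0 / bit_rate_gbps
    max_broadening_ps = 0.5 * bit_period_ps
    return max_broadening_ps / (d_ps_nm_km * linewidth_nm)

# Standard fiber (D = 17 ps/nm/km, assumed) at 10 Gbit/s:
# a directly modulated laser with ~1 nm of effective width versus
# an externally modulated laser at ~0.1 nm.
print(dispersion_limit_km(10, 17, 1.0))   # a few km
print(dispersion_limit_km(10, 17, 0.1))   # tens of km
```

The tenfold narrower linewidth buys a tenfold longer reach, which is the practical argument for external modulation on long spans.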

Nortel Networks

Monday, 2 April 2007

RF Transceiver Design

(photo quoted from European Space Agency)
1. Search the RF specs in the communication standards
Take IEEE 802.11a/b/g as an example: the requirements can be read in IEEE 802.11a, sub-clause 17.3
-Operation frequency band and channel number
-TX and RX in-band and out-of-band spurious emissions: conform to national regulations. For Europe, the applicable standard is ETSI ETS 300 328
-Operating temperature: 0°C to 40°C for an office environment
-Transmitted power: 1000 mW for IEEE 802.11b
-Transmitted spectrum mask
-Transmitted center frequency tolerance
-Allowed relative constellation error versus data rate.
-Transmit modulation accuracy.
-Power-on ramp and power-down ramp
-Receiver minimum input level sensitivity
-Adjacent channel rejection: interfering signal power at the adjacent channel, referenced to the desired signal level set at 3 dB above the sensitivity, for a packet error rate < 10%
-Receiver maximum input level: -30 dBm measured at the antenna connector for a minimum error rate.
2. RF Transceiver Architecture and link budget
-Dual-band transceiver architecture: heterodyne or homodyne; switched or concurrent dual-band
-Receiver power and gain budget: IEEE 802.11a max power (-30dBm) to sensitivity (-82dBm)
-Transmitter power and gain budget
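The receiver budget arithmetic for the IEEE 802.11a figures quoted above (maximum input -30 dBm, sensitivity -82 dBm) can be sketched as follows. The noise-figure check uses the standard sensitivity equation S = -174 dBm/Hz + 10·log10(BW) + NF + SNR; the NF and SNR values are illustrative assumptions, not figures from this post:

```python
import math

MAX_INPUT_DBM = -30.0      # 802.11a maximum input level
SENSITIVITY_DBM = -82.0    # 802.11a sensitivity at the lowest rate

# Dynamic range the receiver chain must handle
dynamic_range_db = MAX_INPUT_DBM - SENSITIVITY_DBM   # 52 dB

def sensitivity_dbm(bandwidth_hz, nf_db, snr_db):
    """Thermal-noise-floor sensitivity: -174 dBm/Hz + 10log10(BW) + NF + SNR."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + nf_db + snr_db

# ~16.6 MHz occupied bandwidth, assumed NF = 10 dB, assumed SNR = 9 dB
print(dynamic_range_db, round(sensitivity_dbm(16.6e6, 10.0, 9.0), 1))
```

The computed figure lands near the -82 dBm spec, which is how a link-budget pass turns a standards number into a noise-figure target for the RF front end.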
3. RF Circuit Design
-Dual-band switch
-Dual-band VCO and frequency synthesizer
4. RF Transceiver Integration and Measurement
5. Conclusion (five steps toward accomplishment)
-Search RF spec. from studying communication standard
-Determine RF transceiver architecture
-Calculate and simulate transceiver link to determine RF sub-circuit spec.
-Design RF sub-circuits with proper technologies such as CMOS/HBT/pHEMT MMIC, HMIC
-RF transceiver integration and measurement

Sunday, 1 April 2007

Spontaneous Speech

(photo quoted from Nippon Telegraph and Telephone Corporation )
Spontaneous speech requires no special training, is efficient, imposes minimal cognitive load and carries a wealth of information at multiple levels. It has features such as variable speaking rate, greater use of short words and function words, and a tendency towards more, and generally less precise, coarticulation.

1. Recovering hidden punctuation
Punctuation is everything in written language other than the actual letters, including punctuation marks and inter-word spaces.
2. Coping with disfluences
Disfluency: a lack of skilfulness in speaking or writing, such as filled pauses, repetitions, repairs and false starts
3. Allowing for realistic turn-taking
Listeners project the end of a current speaker’s turn using syntax, semantics, pragmatics, and prosody.
4. Hearing more than words
Distinguish emotions and perform user-state detection. This is especially important for certain dialog system applications.
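As a toy illustration of point 2 above (coping with disfluencies), a purely pattern-based transcript cleanup might look like this. A real system would also use acoustic and prosodic cues; the filler list and regexes here are illustrative assumptions:

```python
import re

# Strip filled pauses ("um", "uh", "er") and immediate word repetitions
# from a text transcript of spontaneous speech.
FILLED_PAUSES = re.compile(r"\b(?:um+|uh+|er+)\b,?\s*", re.IGNORECASE)
REPETITION = re.compile(r"\b(\w+)(\s+\1\b)+", re.IGNORECASE)

def clean(utterance):
    """Remove fillers, collapse repeated words, normalize whitespace."""
    no_fillers = FILLED_PAUSES.sub("", utterance)
    no_repeats = REPETITION.sub(r"\1", no_fillers)
    return re.sub(r"\s+", " ", no_repeats).strip()

print(clean("um I I want to uh to book a flight"))
# -> "I want to book a flight"
```

Note what this sketch cannot do: repairs and false starts ("flight to Boston, I mean Denver") need syntactic or semantic analysis, which is why disfluency handling remains a research problem.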