Tuesday 31 August 2010

The History of Light Bulbs

One of the first electrical effects used to produce light was incandescence--the emission of light from a heated body. The first electrical incandescent light was created in 1802 by the British chemist and inventor Sir Humphry Davy. He used a platinum strip through which he passed electric current. Platinum was chosen because it has a very high melting point. This incandescent light had two major flaws which prevented practical applications: the light was not bright enough and it lasted only a short time. The experiment was nevertheless an important milestone, even though the first practical incandescent bulbs appeared almost 80 years later.

During the 19th century many experimenters tried various materials and designs. It was the British scientist Warren de la Rue who came up with the idea of putting a platinum filament into a vacuum tube. A vacuum is essential because it prevents air molecules from reacting with the filament, which would shorten its life. Unfortunately this invention was still not practical because of the high cost of platinum. Many patents for incandescent bulbs were granted for various implementations, including ones with a carbon filament. Because many rivals were working on similar projects, some tried to bypass patents, which also led to a few lawsuits.

Many technical problems had to be solved in order to make a bulb suitable for commercial production. A high vacuum is essential for long operation. Until the 1870s there were no pumps which could produce a satisfactory vacuum for light bulbs. With the Sprengel pump it became possible to easily achieve the required vacuum. This pump was one of the key factors that contributed to the success of incandescent light bulbs. The material used for the filament is also very important: it must produce bright light, have a long life and be cheap enough for mass production. Many bulbs at that time used a carbon filament, which was far from the ideal material. In 1904 a tungsten filament was patented and the Hungarian company Tungsram started production. It was also found that if the bulb is filled with inert gas it has higher luminosity and bulb blackening is reduced.

Today such bulbs are produced in the millions. Unfortunately, cheap production is the only advantage of incandescent bulbs. They are very inefficient--only a few percent of the electrical energy is converted into light, the rest is dissipated as heat. There are some attempts to increase the efficiency of incandescent lamps, but this will not change the overall picture of inefficient lighting. Therefore, many countries have taken steps to replace them with more efficient compact fluorescent lamps.

Most bulbs create light which is slightly colored, but there are also full-spectrum bulbs which can reproduce natural sunlight. Such bulbs are used in environments where accurate color reproduction is important. Full-spectrum light bulbs can also be used at home; for many people natural white light creates a very pleasant living environment.

Compiler Design - Practical Compiler Construction

A good optimizing compiler is a must for any computer platform. The availability and quality of compilers will determine the success of the platform. Compiler design is a science. There are numerous books written about compiler principles, compiler design, compiler construction, optimizing compilers, etc. Usually compilers are part of an integrated development environment (IDE). You write a program in some high-level language, click compile, and a moment later you get executable code, an assembler listing, a map file and a detailed report on memory usage.

As a programmer and IDE user you expect fast compilation and optimized generated code. Usually you are not interested in compiler internals. However, to build your own compiler you need a lot of knowledge, skills and experience. Compiler construction is a science. Compilation is a symphony of data structures and algorithms. Storing compiler data in proper structures and using smart algorithms determine the quality of the compiler. Compilers usually have no user interface; they process source files and generate output files: executable code, an object file, assembler source, or any other needed file.

Why are compiler data structures so important?

The compiler needs to store identifiers in symbol tables. Each identifier has attributes like type, size, value, scope, visibility, etc. The compiler must be able to quickly search the symbol table for a particular identifier, store a new identifier into the table with minimal overhead, etc. To satisfy all these requirements the symbol table structure must be carefully designed. Usually hash tables are used for quick search and linked lists for simple addition or deletion of elements. Symbol table management is one of the critical elements of any compiler. Another important internal data structure is the intermediate code representation. This is code that is generated from the source language and from which the target code is generated. Intermediate code is usually used to apply optimizations, so the right form of intermediate code, with a general syntax and detailed information, is crucial for successful code optimizations.
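
As an illustration, a symbol table entry in a Pascal-style compiler could be declared roughly like this (the field names and layout are only an example, not those of any particular compiler):

Type
  TSymbolClass = (scConstant, scVariable, scProcedure, scFunction);

  PIdentifier = ^TIdentifier;
  TIdentifier = Record
                  Name: String;              { identifier as written in the source }
                  SymbolClass: TSymbolClass; { what kind of symbol this is }
                  TypeIndex: Integer;        { reference to the type description }
                  Size: Word;                { size in bytes }
                  Value: LongInt;            { value for constants }
                  ScopeLevel: Integer;       { block nesting level }
                  Next: PIdentifier;         { linked list inside one hash bucket }
                end;

Var
  HashTable: Array [0..255] of PIdentifier;  { one linked list per hash value }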

Converting source code to target instructions is not a big deal. A limited set of registers can sometimes require some smart register saving techniques, but in general each source language statement can be translated into a set of target instructions that performs the required effect. If you do not take actions to optimize the generated code you will get very inefficient code with many redundant loads and register transfers. Excellent optimizations are the shining jewel of any compiler. With some average optimizations you can easily reduce the generated code size by 10% or more. Code size reduction in loops also means increased execution speed. There are many algorithms for compiler optimizations. Most are based on control flow and data flow analysis. Optimizations like constant folding, integer arithmetic optimizations, dead code elimination, branch elimination, code-block reordering, loop-invariant code motion, loop inversion, induction variable elimination, instruction selection, instruction combining, register allocation, common sub-expression elimination, peephole optimization and many others can make the initially generated code almost perfect.
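
As a small illustrative example of what some of these optimizations do (the fragment and the numbers are made up), consider:

Program OptimizationDemo;

Const
  Width  = 10;
  Height = 24;

Var
  Area, Unused: Integer;

begin
  Area := Width * Height;   { constant folding: can be emitted directly as Area := 240 }
  Unused := Area * 2;       { dead code elimination: Unused is never read, so this can go }
  Writeln('Area = ', Area);
end.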


If you have ever tried to write a compiler then you probably already know that this is not a simple task. You can use some compiler generators, or write the compiler from scratch. Compiler construction kits, parser generators, lexical analyzer generators, optimizer generators and similar tools provide an environment where you define your language and let the compiler construction tools generate the source code of your compiler. However, to make a fast and compact compiler you need to design your own compiler building blocks, from the architecture of the symbol tables and the scanner to the code generator and linker. Before you start reinventing the wheel it is a good idea to read some books about compiler design and examine the source code of some existing compiler. Writing your own compiler can be great fun.

There are many excellent books on compiler design and implementation. However, the best book on compiler design is the compiler itself. Take a look at the Turbo Pascal compiler written in Turbo Pascal. This source code shows all the beauty of the Pascal programming language. It reveals all the tricks needed to build a fast and compact compiler for any language, not just Pascal.

Monday 30 August 2010

Recording Studio Software Selection

When you start looking for recording studio software you are usually focused on nice screenshots, fancy descriptions and the price tag. However, most people never check the most important factors when selecting recording software. You should first ask yourself why you need the software, which features are most important to you, whether you already have a computer or would buy a new one, whether you need compatibility with other recording studios, and whether the software will work with your audio hardware or you also need to purchase a new, dedicated sound card. Selecting the right recording studio software is not a simple task.

Why do you need recording studio software? Because every home or professional recording studio uses computers for audio production. You can use computers with appropriate software not just for recording but also for editing, adding effects, sound synthesis, filtering, mastering, archiving, transfer, etc. It is also possible to build a cheap home recording studio with an ordinary PC and some popular audio recording software. Computers and audio recording software have become an essential part of every recording studio.


What features do you need? Well, this depends on your individual requirements and studio type. Every recording program supports recording, editing and playback. You should decide whether you need more emphasis on audio recording features (level matching, spectrum analysis, audio compression, conversion tools, mastering, etc.) or on MIDI devices, instruments and sampling (sequencers, wavetable synthesis, drum machines, musical notation, etc.). Many recording studio software solutions have good support for both audio operations and musical instruments.

Do you already have a sound card, or can you afford to buy a new one? Most music recording software works with standard sound cards that are supported by the operating system, while some packages work only with proprietary audio hardware. One such example is Pro Tools, which works only with special Digidesign or M-Audio hardware. The basic rule is that you first select your audio hardware according to the needs of your studio. The next step is to find suitable audio recording software. However, in some cases these steps can be reversed. If you know exactly what software you need or would like to have, then you need to find a sound card that is supported and has connections compatible with your recording studio equipment.

If you don't have a computer yet, then you might have a dilemma: Mac or PC? The answer is not simple. Both platforms work well and are found in recording studios. Most recording studio software solutions work on both platforms, but not all of them. If you already have a computer that you intend to use then this question is already answered. Otherwise it is a good idea to check the availability of the chosen audio recording software on Mac and PC.

Do you need to transfer your music projects to another studio? If the answer is yes, then your software needs to be compatible with the software in the studios you intend to work with. A typical example of standard recording studio software which excels in compatibility is Pro Tools. All versions of Pro Tools support the same project file format, so you can easily transfer projects between studios that use Pro Tools. Of course, every audio application supports standard audio file formats like WAV and MP3 for reading and writing. Compatibility between different recording applications is defined at the level of project files. This means that you can save your unfinished project in one studio and open it in another one, where it can be finished.

Price? The price of the most popular audio recording software starts at about $100. This is a very small amount compared to the total price of your studio equipment. Therefore, the price of the software should be one of the last factors when selecting software for your studio.

Selecting the right recording studio software can be a difficult task, but you can simplify it by knowing exactly what you need and how you will use the software in your recording studio.

You can find more information about software used in recording studios at the Recording Studio Software website, which is dedicated to recording studios, computers and software. There you can read more about Macs, PCs, recording studio software selection and recording studio design, and you can also check supported features and compare various recording studio software packages.

Sunday 29 August 2010

Radio Mobile - Free Radio Planning Software

Radio wave propagation is a very complex process. It is impossible to precisely predict how waves will spread and what values we will measure at some particular point. Therefore we use mathematical models to approximate radio propagation and to calculate field strength values with a predefined probability.

A propagation model, based on electromagnetic theory and on statistical analysis of both terrain features and radio measurements, predicts the median attenuation of the radio signal as a function of distance and the variability of the signal in time and in space. Propagation models are defined by curves and formulas. With such a model we can predict propagation at an arbitrary distance from the transmitter.
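
As a minimal illustration of such a formula--free-space path loss only, which is far simpler than the statistical models real planning tools use--the calculation could look like this:

Program PathLossDemo;

{ Free-space path loss: L = 32.45 + 20*log10(f/MHz) + 20*log10(d/km), in dB }
Function FreeSpacePathLoss(FrequencyMHz, DistanceKm: Real): Real;
begin
  FreeSpacePathLoss := 32.45 + 20 * Ln(FrequencyMHz) / Ln(10)
                             + 20 * Ln(DistanceKm) / Ln(10);
end;

begin
  Writeln('Path loss at 100 MHz over 10 km: ', FreeSpacePathLoss(100, 10):0:1, ' dB');
end.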

Using curves and mathematical formulas manually is a very tough task. In order to automate and speed up such calculations, many software tools have been developed. Normally, radio planning software packages are used by broadcasters, mobile operators and radio frequency authorities. Because of the complexity and the small number of potential customers, the price of such a software package can be pretty high. But there is one exception to this rule.

Radio Mobile is a software tool for radio propagation calculation. The author has decided to publish it as freeware, which means that the software is freely available on the web. The software is primarily dedicated to amateur radio; however, it can also be used in other areas, including broadcasting and professional mobile communications.

Calculations are based on the Longley-Rice propagation model. This is a general-purpose radio propagation model for frequencies between 20 MHz and 20 GHz. In order to use the software you also need terrain model data. This data is used to calculate the effective antenna heights required by the propagation calculations. You can find some terrain model data sources on the web. They offer free DTM (Digital Terrain Model) data, which is enough to start working with Radio Mobile.

Radio Mobile supports many useful features: the possibility to use DTM data in various formats (DTED, SRTM) and in many layers with different resolutions, the possibility to add map pictures in raster formats (BMP, JPEG, GIF, TIFF, PNG), the possibility to calculate interference between radio stations, and lots of functions to customize the view.


The basic workflow consists of entering radio stations with all their parameters (if they are not entered yet) and calculating the coverage. Here you have a lot of options to customize the calculation and the view. Results can be saved for later use. It is also possible to define a coarser calculation step to save some time.

For displaying results, the grayscale DTM alone is usually not enough. You would probably like to see them on some map. For this purpose you can use any map picture with known coordinates. The software allows you to enter the coordinates of each picture in order to display it correctly over the DTM.

A very useful function is the calculation of interference between radio stations. You define the protection ratio and the minimum field strength for the calculation, and the software marks the interference area with a predefined color.

The software is also very useful for microwave link planning. It allows you to display the terrain profile between two points, display the optical visibility from any point and calculate link parameters. Although this software is free and dedicated to amateur radio, it can be a very useful tool for daily radio communication tasks.

Saturday 28 August 2010

Digital Dividend - A Gap Between Digital Terrestrial Television and Miserable Failure

The first analog television broadcasts were made before World War II. Since the late 1950s we have had color television. Starting in the 1980s there were many attempts to improve or extend analog television. All these experiments failed. The first real possibility for a significant advance in terrestrial television broadcasting came in the 1990s, when MPEG compression was successfully introduced into broadcast applications. And with the first practical COFDM modulators the last major problem for digital broadcasting was solved.

The first attempts at digital terrestrial broadcasting were initiated because in some areas there was no free spectrum for new TV channels. Digital broadcasting allows us to use spectrum more efficiently, since many TV channels and other services can share the same bandwidth which in analog television is used to broadcast only one TV channel. There is quite a lot of radio-frequency spectrum allocated to broadcast television. In Europe there are two frequency bands dedicated to this service: 174 MHz to 230 MHz in the VHF band and 470 MHz to 862 MHz in the UHF band. In 1961 there was a conference in Stockholm where individual frequency channels were assigned to each country. To avoid interference, strict rules and procedures were defined and each country could only use those frequencies that were agreed with the other countries. The practical consequence was that at major transmitting sites only about 4 channels were available in the UHF band and 1 in the VHF band.

Many countries came to a situation where expansion of terrestrial television was not possible because there were no free channels available. Digital terrestrial broadcasting was seen as a solution to this problem. The first experiments were successful and many countries wanted rules to extend the Stockholm 61 agreement. These rules were put into an agreement called Chester 97, named after the place where the conference was held in 1997. This agreement defined additional rules for digital terrestrial broadcasting. But it was only a temporary solution, because with digital broadcasting it is possible to use spectrum more efficiently and it would be possible to make a brand new plan taking into account the different protection ratios between digital signals.

The "final" solution for a new frequency plan for digital broadcasting was finalized at the Regional Radiocommunication Conference held in Geneva in 2006 (RRC-06). One of the outputs of this conference was a new frequency plan for digital broadcasting (GE06D). This plan replaces the old Stockholm plan (ST61) for analog television and provides a framework for detailed national planning. Because of digital broadcasting, different protection ratios and better tools to analyze possible interference, it was possible to make a plan with 7 layers in the UHF band. Multiplying the number of layers by the number of TV channels in one multiplex gives quite a significant increase in the total number of potentially available TV channels. At least in theory, this new plan should satisfy future broadcasting needs for at least the next 20 years.

With DVB-T, MPEG-4 and statistical multiplexing you can easily accommodate 10 SDTV channels with decent quality in one multiplex. For HDTV this figure is about 3 to 5 channels, and with DVB-T2 you get an additional capacity gain of up to 50%. Now that the frequencies are available and TV broadcasting is expanding, new TV services will come in the near future. Terrestrial broadcasting is still an important platform for TV distribution in many countries. This means that the capacity currently available for terrestrial broadcasting will also be used for HD services, and many TV channels now available in SD will in the future migrate to HD. The total number of TV channels available on the terrestrial platform will therefore be somewhat lower than it would be if only SDTV were used. However, taking into account all the layers that were planned, this should not be a problem.
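
As a rough illustrative calculation based on the figures above (real multiplex capacities depend on the chosen parameters):

  7 layers x 10 SD programs per multiplex = about 70 SD programs
  7 layers x 4 HD programs per multiplex  = about 28 HD programs (DVB-T, MPEG-4)
  7 layers x 6 HD programs per multiplex  = about 42 HD programs (DVB-T2, about 50% more capacity)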

Unfortunately the reality differs from the theory. Immediately after the conference the European Commission came up with the idea that, since digital broadcasting is more spectrum-efficient, some part of the spectrum should be released after the digital switch-over. This part of the spectrum was called the "digital dividend". Of course, this idea was supported (and probably initiated) by the mobile industry. The appetite for additional frequency spectrum for mobile services is enormous. But there is a problem. This consideration was not taken into account during the preparation of the new frequency plan or at the conference. There is no spectrum that will be released after digital switch-over. The same frequency bands in VHF and UHF that were available for analog television were now planned for digital television. With some compromises many countries were able to tightly squeeze 7 layers of coverage into the UHF band. This means that, at least in general, there is no gap, no possibility to add anything else into the UHF frequency band allocated to terrestrial television. Even changes to the existing plan would be very difficult.

So the Commission instructed technical bodies to investigate the possibility of finding a part of the UHF spectrum, currently allocated to terrestrial television, for implementing new services. These activities led to the proposal to use TV channels 61 to 69 (frequencies from 790 MHz to 862 MHz) for mobile services on a non-mandatory basis. This means that each country can decide whether it will use this band for broadcasting or for mobile services. In theory this sounds like a good compromise, but in practice this is a very bad solution with limited or no usability and a lot of problems.

All the countries participating at the RRC-06 put a lot of hard work and effort into making a new digital plan. This plan uses all the available channels in the UHF band, from 21 to 69. This means that countries got their frequency rights also on channels 61 to 69. Implementing mobile services would mean moving broadcasting out of this band. But moving where? There is no available spectrum for this, since the whole UHF band for television was tightly planned for digital broadcasting. Implementing mobile services simply means forgetting about broadcasting in this band. This, of course, means less capacity for broadcasting.

There are many problematic aspects of the digital dividend approach:

  • The Stockholm 61 analog plan lasted for 45 years. All European countries put a lot of effort into making a new digital frequency plan. The Geneva 06 plan is the result of many years of hard work, negotiations and compromises. Immediately after the conference the EC started activities that undermine it.
  • The demand for broadcasting spectrum will grow. HDTV on the terrestrial platform is a reality. There is now more capacity for broadcasting than there was with analog television, but not as much as was initially assumed. Broadcasting is evolving, and spectrum capacity will be needed for new services yet to be developed.
  • There is probably no country that would give up 72 MHz (9 TV channels) of broadcasting spectrum unless there is a very good and profitable reason. Currently the anticipated mobile services don't seem to be such a reason.
  • The GE06 plan for digital broadcasting is built on the principle of equitable access to the spectrum. Releasing the digital dividend band will create inequitable access to the broadcasting spectrum, since some countries will lose up to 20% of their rights in channels above 60.
  • Implementing new mobile services in this band is useless if the approach is not accepted by all countries. If one country decides to keep broadcasting in this band, it will affect at least all neighboring countries.
  • There is significant technical incompatibility between broadcasting and mobile services: a huge difference in network topology and field strengths. Planned mobile services in the digital dividend band will cause interference to DVB-T reception. This may result in inefficient use of spectrum.
  • Mobile services are never free. The mobile operator is always making a profit. The existing models of providing mobile broadband are purely commercial. On the other hand, terrestrial broadcasting is free and available to everybody. There is practically no country without free-to-air terrestrial broadcasting. Public service broadcasters strive to deliver high quality content to all segments of the population. They use money to make quality content and provide public services, and not the other way around. Digital terrestrial television should remain a competitive platform. An attractive number of commercial and public services is always in the public interest.

The term 'dividend' denotes the monetary reward (payback) for your investment that you expect, and sometimes get, at the end of a business cycle. The digital dividend is similar--it is the payback for the investment in the digitization of television broadcasting. The investment in digital broadcasting is made partly by the broadcasters, which have to replace their transmitters, but mostly by the viewers, who have to replace their receivers. So the broadcasters and viewers will pay for a change of technology that will free some spectrum for commercial mobile services. Who invests, and who gets the dividend?

Friday 27 August 2010

Turbo Pascal Download - How to Compile Old Projects

Turbo Pascal was probably the most widely used Pascal compiler of all time. Borland released it in the early 1980s, and at that time it was available on the CP/M and PC platforms. It featured a fast compiler, an integrated development environment and a very affordable price. Its syntax, later extended into Object Pascal, has become a de facto standard, and the concept of units is still used in all modern implementations. Until recently the Pascal programming language was taught in many schools. For many people it was the first step into computer programming. It is a language that is easy to write and easy to read, so you need very few comments to understand what a program does.

In the 1990s Turbo Pascal evolved into Delphi, a rapid application development tool for Windows. It still uses Object Pascal with many additional features. However, because of its popularity in the early years, there are many projects that were developed with Borland DOS compilers. If you would like to compile such a project you would need the original compiler, most likely version 7.0, which was the last released DOS version. Unfortunately, this version is no longer available. Some time ago Borland released old versions of the compiler free of charge: 1.0, 3.02 and 5.5. The last version, 7.0, is not among them.


If you would like to compile old Pascal code you can either try to find the original compiler from one of the illegal sources or use the open-source Free Pascal compiler in compatibility mode. There is also a third option: the TPC32 command line compiler, which is available as part of the demo package of the TPC32 source files and can be downloaded for free. This is not some limited version; it is a fully functional compiler compatible with the TPC.EXE command line compiler. It is called a demo because the full version includes the complete source files, which are not available for free.

TPC32 is the successor of TPC16, a compatible compiler written in Turbo Pascal 7. It is compatible with the original Borland compiler in all aspects: it compiles the same source files and generates binary-compatible unit and executable files. TPC32 is essentially the same compiler; the sources were slightly modified to be compatible with Delphi 7, which doesn't use the old segment-offset memory model. TPC32 still generates 16-bit x86 code. The source code of both compilers is available for purchase. You can use this source code to understand the internal data structures and algorithms of the famous Borland product or to make your own compiler. Both compilers are also available in demo versions which include fully functional compiled executable files. Because TPC16 is a DOS application it has some memory limitations. TPC32 is a Win32 application and uses a flat memory model with very few limitations. Both compilers can be used to compile old projects created with the original Borland tools.

The Pascal programming language is now rarely used in schools, but for some programmers it is still very popular and many old projects are still maintained. The TPC32 compiler might be a solution for those who need a cheap and legal way to compile old Pascal sources.

I'm a big fan of the Pascal programming language, so I have a lot of old projects created with Turbo/Borland Pascal. If you need a free compatible solution to compile old Pascal projects you can download a demo version of TPC32, a Turbo Pascal compiler written in Delphi, which contains a fully functional command line compiler. You can also get the TPC32 source code. It can be used for your own project or as a great book on compiler design and implementation.

Thursday 26 August 2010

FM Radio - Any Digital Alternative?

FM radio is a well-known and widely used technology. It is used all around the world. There are some minor differences in modulation parameters and frequency bands, but the basic principle is the same. It is amazing how popular this radio technology has become. FM radio receivers are found everywhere, even in mobile phones. In the last decade broadcasting has made a big step toward digital technologies. We are now in a phase of transition from analog television broadcasting to various forms of digital broadcasting. And television is far more complex than radio--a simple stereo sound service. Why is there no suitable technology for digital radio?


The answer is pretty simple. We have to look at the key aspects of the transition of television broadcasting. Analog television uses one frequency channel (6 to 8 MHz of bandwidth) for one program. Digital television broadcasting uses the same radio-frequency channel to broadcast a multiplex--a digital package of many TV programs and other services. The advantage is obvious--using the same radio-frequency spectrum we can now broadcast many TV channels and other services. Therefore, digital television broadcasting means more efficient use of the frequency spectrum. There is another very important aspect of digital TV broadcasting. Since both technologies use radio-frequency channels with the same bandwidth, it is possible to switch from analog to digital step by step. Such a change from one technology to another usually takes years and needs detailed preparations on a large scale.

To switch from analog FM to digital broadcasting we need a suitable technology that will offer comparable quality, mobile reception, capacity for more radio stations, efficient use of radio spectrum, a step-by-step transition and cheap receivers. There are many digital technologies that are already available for sound broadcasting. Unfortunately, none of them is suitable as a direct replacement for existing analog broadcasting.

There are already many efficient audio codecs that can be used with any digital technology. There are also digital transmission technologies suitable for digital sound broadcasting, like T-DAB, DRM and DRM+. DVB-T and DVB-T2 in particular can also be used for radio. All these technologies can provide excellent quality and mobile reception. But this is not enough.

FM radio uses channels about 250 kHz wide. Channel spacing is 100 kHz in most parts of the world and 200 kHz in the USA and some other countries. This combination of channel bandwidth and spacing makes it very difficult to use analog and digital broadcasting simultaneously. Therefore, the transition with existing technologies will be difficult. Some partial solutions like HD Radio are nothing more than additional data and audio transmitted alongside the main analog carrier.

There are probably only two possible approaches to the digitalization of the FM band: either find a suitable technology that satisfies all the above-mentioned requirements, or select one technology that is future-proof enough and make a totally new frequency plan for a fast transition. For now, the digitalization of the frequencies used for FM radio will have to wait a while.

Tuesday 24 August 2010

Quarks, Big Bang and Large Hadron Collider (LHC)

People are curious. Curiosity is one of the key elements that drives humanity towards the answers about our existence and our future. Since ancient times people have asked themselves about the origins of everything, from the universe to the basic elements of matter. Some people many thousands of years ago assumed that if you divide some piece of matter, this division must come to an end. This process should end with the basic, indivisible elements that constitute matter--atoms.

In the last centuries many experiments have confirmed that matter indeed consists of small particles. The scientific approach has contributed to the discovery of various natural and synthetic substances, molecules, chemical elements and atoms. Atoms, once believed to be indivisible, were also found to have a hard nucleus with electrons orbiting around it. Then it was discovered that the atomic nucleus consists of protons and neutrons. So atoms are divisible. This fact had many consequences; one with the most notable effect is the fission nuclear bomb. However, the division story didn't end there. Protons and neutrons were also found to contain smaller particles--quarks.


Currently the list of all elementary particles is pretty long. This list is part of the Standard Model--a model of how everything exists and interacts. It is believed that this model is not the final picture of the universe; there are still some unanswered questions. On the other hand, the universe itself is a subject of investigation. One of the key discoveries was that the universe is expanding. From this fact we can conclude that in the past the universe was smaller. The further we go into the past, the smaller it was. Sooner or later we come to the moment in time where the universe was infinitely small. This is called the Big Bang--the moment when the universe started to develop as we know it today, some 13.7 billion years ago. This is now the leading theory about the evolution of the universe. It is still unknown what banged, how and why.

The latest project to find some missing answers is the Large Hadron Collider (LHC) at CERN, Geneva. It is a giant ring about 100 meters underground where two beams of particles traveling close to the speed of light collide. Each collision produces an enormous amount of other particles. Analysis of this debris will hopefully answer some questions about the nature of particles, or even raise some new ones. Because of the enormous collision energy (about 14 TeV) the circumstances will be close to the situation immediately after the Big Bang. The LHC is currently the largest and most expensive scientific project.

Answering questions about the micro and macro world will not only satisfy our curiosity but will also help us to understand the world. If we understand the world, we can make it better. And a better world is everybody's dream.

While doing science and searching for new particles it is a good idea to listen to good music, like the music of John Lennon. Either with the Beatles or as a solo musician, John Lennon proved that he knew what music is. Visit http://johnlennonmusic.net/ and learn about this musical genius.

Monday 23 August 2010

Home Recording Studio Software

There are many professional recording studios around: famous names, soundproof rooms, fancy equipment and high prices. But you can build a decent recording studio at home. All you need is a suitable quiet place, a computer and recording studio software. The cost of the hardware and software can be as low as the price of a state-of-the-art gaming computer!

To build a professional recording studio at home is not so hard. It is not the equipment that defines professionalism; it is your ambition and knowledge to achieve the goal. If you can afford to dedicate one room for studio purposes then all you need is some simple audio hardware, a computer and software.

The prices of computers vary. Faster computers with better performance are preferred, but usually come with higher prices. You can also select individual components and build a custom, not-so-expensive computer according to your needs.

You would expect that the most expensive piece of equipment is the recording studio software. Wrong! There are many software packages that are also used in professional recording studios and that don't cost a fortune. In fact, they are quite cheap. For a few hundred dollars you can get software with a lot of features, an attractive and functional user interface, and the ability to convert any PC or Mac into a powerful recording studio.

There are many recording software packages available, and they all function in similar ways. Some of the most popular studio software packages are Propellerhead Reason, Pro Tools, Cubase, Nuendo, Sonar, and Digital Performer. All of these packages can be used in a home recording studio. You need to compare them and check whether they support the features you are interested in. If you are not going to buy a new computer, then you should check compatibility with the existing one--be careful, because some software is only available for either the PC or the Mac platform.

Usually the first step when building a home recording studio is to define the purpose and select the audio equipment, including the sound card. The next step could be selecting the computer and software. However, in some cases these steps can be reversed. For example, you are astonished by the capabilities and user interface of the Propellerhead Reason software. In such a case the software is already selected. You need a computer to run it, some audio card and probably a cheap MIDI keyboard for your first music experience.

One of the most popular recording studio software packages is Pro Tools. It is used in many professional studios. It is so popular because it comes with (or works with) dedicated, high-quality audio hardware and really covers all tasks in audio recording, editing and mastering. There are three versions of Pro Tools available. Pro Tools HD is designed with the highest quality standards in mind and runs on state-of-the-art DSP hardware. Pro Tools LE is a medium-priced solution and works with many audio cards from Digidesign and M-Audio. And there is also a very cheap Pro Tools M-Powered that can be used with dozens of cheap M-Audio interfaces. The common point of all Pro Tools versions is that they all use the same file format. This means compatibility between your home recording studio and any professional studio using Pro Tools.


If you have decided to build a home recording studio you should first take a look at the available software. Learn what is possible and start dreaming. Even with a modest computer and cheap software you can start recording or composing music. You will be amazed by all the possibilities you have at home. Soon you will be able to do things that a few years ago were only possible in professional recording studios.

You can find more information about software used in recording studios at the Recording Studio Software website, which is dedicated to recording studios, computers and software. There you can read more about Macs, PCs, recording studio software selection and recording studio design, and you can also check supported features in the recording studio software comparison table.

Sunday 22 August 2010

Compiler Design - Hash Functions and Tables

The purpose of every compiler is to read an input file in one programming language and convert it to one or more output files. The output file can be in a different programming language, object code or executable code. The process of compilation must first examine the input file. This means reading all characters, identifying keywords, expressions and statements, and storing all the data into symbol tables for future use. The symbol table is one of the most important data structures in any compiler.

The symbol table stores identifiers and their attributes. Every time the compiler finds a new identifier in the source code it needs to check if this identifier is already in the table, and if not, it needs to store it there. This means a lot of searches and comparisons with symbol table items. Searching is always a very time-consuming operation. Our goal is to have a fast compiler; therefore we should find a way to make searches across the symbol table as fast as possible.

One simple yet effective approach is to use a hash function and create a hash table. For every identifier in the table we apply the hash function and calculate some number. The hash function is an arbitrary function that returns some number for each identifier. It can be a simple sum of the ASCII codes of the identifier's characters or something more complex. Then we use, for example, the last 4 bits of this hash value to determine where to search for our identifier. Four bits of hash value mean that we have 16 different linked lists of identifiers. We search only the list that belongs to the calculated hash value. This means that we only have to search a small list of identifiers which share the same hash value. Using more bits of the hash value, and consequently having more linked lists, means faster searches, but we need more space for a bigger hash table.


If we don't find our identifier in this list then we can be sure that the identifier is not in the table, because all other identifiers have different hash values so they must be different. In such a case we simply add the new identifier at the end of the list of identifiers which belongs to the calculated hash value. Using hash functions and hash tables is a very effective way to speed up searches in symbol tables. Hash functions and hash tables are used in almost all compilers because their implementation is pretty simple and the gain in search speed is huge.
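
A minimal sketch of such a hashed symbol table could look like this in Pascal (the hash function and the names are just an example; for brevity new identifiers are added at the head of the bucket list, which makes no difference for searching):

Program SymbolTableDemo;

Const
  BucketCount = 16;

Type
  PSymbol = ^TSymbol;
  TSymbol = Record
              Name: String;
              Next: PSymbol;
            end;

Var
  Buckets: Array [0..BucketCount - 1] of PSymbol;
  I: Integer;

Function Hash(Name: String): Integer;
var
  I, Sum: Integer;
begin
  Sum := 0;                            { simple hash: sum of the character codes }
  for I := 1 to Length(Name) do
    Sum := Sum + Ord(Name[I]);
  Hash := Sum and (BucketCount - 1);   { keep only the last 4 bits }
end;

Function FindSymbol(Name: String): PSymbol;
var
  P: PSymbol;
begin
  P := Buckets[Hash(Name)];            { search only the list for this hash value }
  while (P <> nil) and (P^.Name <> Name) do
    P := P^.Next;
  FindSymbol := P;
end;

Procedure AddSymbol(Name: String);
var
  P: PSymbol;
  H: Integer;
begin
  if FindSymbol(Name) = nil then
    begin                              { not found: add it to its bucket list }
      New(P);
      P^.Name := Name;
      H := Hash(Name);
      P^.Next := Buckets[H];
      Buckets[H] := P;
    end;
end;

begin
  for I := 0 to BucketCount - 1 do
    Buckets[I] := nil;
  AddSymbol('Counter');
  AddSymbol('Limit');
  if FindSymbol('Counter') <> nil then
    Writeln('Counter is in the symbol table');
end.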

There are many excellent books on compiler design. However, the best book on compiler design is the compiler itself. Take a look at the Turbo Pascal compiler source code--a Turbo Pascal compiler written in Turbo Pascal. This source code shows all the beauty of the Pascal programming language and reveals all the tricks needed to build a fast and compact compiler for any language, not just Pascal.

Saturday 21 August 2010

Pascal Compiler For 8051 Microcontrollers

The 8051 core is one of the most widely used microcontroller cores. It is about 30 years old and still very popular. Originally designed by Intel in the late 1970s, the 8051 core found its way into many popular microcontroller families manufactured by Atmel, Silicon Labs, NXP and many others. One of the reasons for the popularity of 8051-based microcontrollers is the availability of many excellent compilers, from freeware applications to high-priced professional development tools.

A very popular programming language is C. It is widely used in the development of operating systems, desktop applications and embedded systems. Besides assembly language, C is the most popular programming language used for embedded programming, and 8051 microcontrollers are no exception. SDCC is the most popular open-source C compiler for 8051 microcontrollers, while Keil (now an Arm company) makes the widely used and popular commercial development tools for 8051 microcontrollers.

But there are also other popular languages. One of them is Pascal. Named after the French mathematician and physicist Blaise Pascal, it was developed by Niklaus Wirth in the late 1960s. The main objective of this programming language was to teach programming with emphasis on structured programming. Many schools used Pascal as an introductory language to teach the first steps in programming. Pascal is considered a high-level programming language. Algorithms implemented in Pascal need few comments, since the statements are composed of English keywords that clearly describe what they do.

The most popular implementation of the Pascal programming language was the Turbo Pascal compiler released by Borland in the early 1980s. The key to its success was a compact compiler which generated executable code, and an integrated development environment (IDE) where you could write, run and debug your programs. Turbo Pascal significantly contributed to the popularity of the Pascal programming language. In the 1990s Turbo Pascal evolved into Delphi--a visual IDE for Windows.

In embedded systems Pascal is rarely seen. One of the reasons is probably the lack of Pascal compilers for microcontrollers. There is absolutely no reason why Pascal could not be used in the embedded world. Regardless of the programming language used, the output of the compiler, either C or Pascal, can be compact, optimized code. On the other hand, there is a plethora of libraries written in C for any imaginable task. This usually implies using a C compiler for embedded development.

One of the successful implementations of Pascal in the embedded world is Turbo51--a Pascal compiler for 8051 microcontrollers. It is a command line compiler with Turbo Pascal syntax. If you are familiar with Turbo Pascal then you will be able to quickly start programming for any 8051-compatible microcontroller. Turbo51 supports all 8051 memory models and generates compact, optimized code that can run even in memory-limited versions of 8051 microcontrollers.


If you are programming for 8051 microcontrollers and still remember Turbo Pascal then you are welcome to visit http://turbo51.com/ and download the Pascal compiler for 8051 microcontrollers.

It is up to you to decide which language you will use. Of course, in many cases this decision is already made because of portability or team development issues. But if you don't check other options you will never know about the advantages they offer.

Friday 20 August 2010

DVB-T2 - The Most Advanced System For Digital Terrestrial Television?

There are many different systems, or transmission standards, for digital terrestrial television. North America uses ATSC, Japan and Brazil have opted for ISDB-T, China uses DMB-T, Korea uses T-DMB, while Europe, Russia, Australia, India and many other countries use the DVB-T system. DVB-T is one of the standards of the DVB consortium and uses OFDM as the basis for modulation.

Each of these systems has advantages and disadvantages. And once you choose a transmission standard you don't change that decision for quite some time. The transmission standard for digital broadcasting is one of the basic properties that define the equipment (transmitters and receivers) that can be used in a specific country. Changing the transmission standard means replacing receivers and transmitters (at least the modulators) with new devices.

Probably the only reasonable way of introducing a new broadcast technology is with a new service. You simply offer a new, attractive service which, of course, uses the new technology. Anybody interested in this new service will need to buy a new set-top box and therefore will never ask questions about technology and standards. A brilliant example of this approach is the introduction of HDTV in the United Kingdom. The UK was one of the first countries that started using DVB-T, back in the late 1990s. The introduction of HDTV brings two advanced technologies: DVB-T2 as the transmission standard and MPEG-4 as the coding standard. This way nobody will care about the price of the new technology, because it is the new service (HDTV) they will pay for.


The advantages of MPEG-4 over MPEG-2 are pretty obvious. But what advantages does DVB-T2 bring? The DVB-T2 standard was finalized in June 2008. It brings many improvements and features that increase the capacity and robustness of the transmission channel. The most important changes compared to the "old" DVB-T standard are:

  • Added 256-QAM constellation; each service can have its own constellation
  • Added 1k, 4k, 16k and 32k modes
  • Added guard intervals 1/128, 19/256 and 19/128 (for 32k mode the maximum is 1/8)
  • Changed Forward Error Correction (FEC) algorithm: Low-density parity-check (LDPC) code + BCH, added code rates 3/5 and 4/5
  • Fewer pilots (8 different pilot patterns); equalization can also be based on the RAI CD3 system
  • Added 1.7 MHz and 10 MHz channel bandwidth
  • Multiple-Input Single-Output (MISO) may be used (Siavash Alamouti scheme)

In general, DVB-T2 offers around a 50% increase in capacity over DVB-T. DVB-T2 together with MPEG-4 forms an advanced system for digital terrestrial television broadcasting. It is very likely that this combination will be widely used in the future and will not be superseded by some even more advanced technology for years. However, the fact is that we live in a world full of surprises.

If you are interested in the details of DVB-T transport streams you can take a look at the MPEG transport stream analysis of some multiplexes broadcast in Austria, Italy, Hungary, Croatia, Latvia, Estonia and some other countries.

It is very likely that, because of its many advantages, the DVB-T2 system will become the predominant digital video broadcasting system used worldwide.

Thursday 19 August 2010

JTAG - Standard Test Access Port and Boundary-Scan Architecture

When you have an integrated circuit (IC), or many such devices on a board, where each IC has 100 or more pins, it is mandatory to have some tool to test connections and verify operation. With this in mind, an expert group prepared a specification for testing that was standardized in 1990 as IEEE Std. 1149.1. It is also known as the JTAG (Joint Test Action Group) standard.

JTAG is a method for testing connections on printed circuit boards (PCBs) that is implemented at the integrated circuit (IC) level. It is very difficult to test complex circuits with traditional in-circuit testers. Because of physical space constraints and the inability to access very small components and BGA devices, the cost of board testing has increased significantly. JTAG is an elegant solution to the problems of physical in-circuit testers.

With JTAG you can test the interconnects between integrated circuits on a board without using physical test probes. This is a big advantage because you don't need any additional customized tool for testing. Of course, the device has to be JTAG enabled. This means an additional cell (a boundary-scan cell) for each pin. A boundary-scan cell can set or read data on its pin. The boundary-scan cells are connected together and the data is serially shifted through them along a serial data path called the scan path or scan chain. This is the basic principle of the JTAG interface.
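
To make the principle more concrete, here is a hypothetical bit-banging sketch in Pascal. SetTCK, SetTDI and GetTDO stand for routines that would drive the test access port pins of a real JTAG adapter; they are placeholders, not part of any actual tool:

Program JtagShiftDemo;

{ Placeholder routines that would drive the pins of a real JTAG adapter }
procedure SetTCK(Level: Boolean); begin { drive the TCK pin } end;
procedure SetTDI(Level: Boolean); begin { drive the TDI pin } end;
function GetTDO: Boolean; begin GetTDO := False; { read the TDO pin } end;

{ Shift one bit into the chain on TDI and capture one bit coming out on TDO }
function ShiftBit(BitIn: Boolean): Boolean;
begin
  SetTDI(BitIn);       { present the new bit while TCK is low }
  ShiftBit := GetTDO;  { the bit coming out of the end of the chain }
  SetTCK(True);        { rising edge: the devices sample TDI }
  SetTCK(False);       { falling edge: the devices update TDO }
end;

{ Shift a whole byte through the scan chain, least significant bit first }
function ShiftByte(DataOut: Byte): Byte;
var
  I, DataIn: Byte;
begin
  DataIn := 0;
  for I := 0 to 7 do
    if ShiftBit((DataOut shr I) and 1 = 1) then
      DataIn := DataIn or (1 shl I);
  ShiftByte := DataIn;
end;

begin
  { A real tool would first move the TAP controller into the Shift-DR state }
  { with a sequence on TMS; this demo only shows the shifting itself.       }
  Writeln('Shifted back from the chain: ', ShiftByte($A5));
end.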

JTAG eliminates the need for a large number of test vectors which would otherwise be needed to initialize all the devices. Using JTAG means shorter test times, increased diagnostic capability, higher test coverage and lower equipment cost. Although there are many variations of the JTAG header on boards, it is possible to use the standard JTAG signals with almost any JTAG interface and boundary-scan software.

An additional benefit of the JTAG interface is that it can also be used for programming and debugging. Many microcontrollers, flash memories, FPGAs and similar devices can be programmed via the JTAG interface, and the same interface can be used for debugging. JTAG is a big step toward standard interfaces in the electronics industry.


There are many JTAG cables that can be used with more than one device. In fact, a JTAG cable is more than a cable. Usually it contains some small electronics to boost signals and to provide a standard computer interface. The price of the simplest JTAG cable can be as low as $5.

If you are interested in details you can visit http://JtagCables.com and read more about the JTAG standard and various JTAG cables used by the industry.

Wednesday 18 August 2010

I2C Analyzer - Monitor and Control I2C Bus

Almost every embedded system has peripheral devices. Even if the system needs only I/O lines, there are many cases where the microcontroller does not have enough pins. Port expanders are very popular because you can place them anywhere and you don't have to make long connections from the microcontroller to remote board areas for each I/O signal. Most port expanders and other peripheral devices use the I2C bus to connect to the microcontroller.

The I2C bus is a simple serial communication bus and protocol. It needs only two signals, SCL (clock) and SDA (data). Writing software to support I2C devices is easy. Many microcontrollers have built-in hardware support, but even if I2C is not directly supported you can write a few short routines to send and receive data. I2C devices are simple to control, so they are widely used even in projects intended for absolute beginners.
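
To illustrate how short such routines can be, here is a hypothetical bit-banged I2C master write in Pascal. SetSCL, SetSDA and GetSDA stand for routines that would drive and read the two open-drain lines on real hardware, and the device address and register used at the end are made up:

Program I2CWriteDemo;

{ Placeholder routines that would drive and read the bus lines on real hardware }
procedure SetSCL(Level: Boolean); begin { drive or release the SCL line } end;
procedure SetSDA(Level: Boolean); begin { drive or release the SDA line } end;
function GetSDA: Boolean; begin GetSDA := True; { read the SDA line } end;

procedure I2CStart;
begin
  SetSDA(True);                     { start condition: SDA goes low while SCL is high }
  SetSCL(True);
  SetSDA(False);
  SetSCL(False);
end;

procedure I2CStop;
begin
  SetSDA(False);                    { stop condition: SDA goes high while SCL is high }
  SetSCL(True);
  SetSDA(True);
end;

{ Send one byte, most significant bit first, and return True if the slave acknowledges }
function I2CWriteByte(Data: Byte): Boolean;
var
  I: Integer;
begin
  for I := 7 downto 0 do
    begin
      SetSDA((Data shr I) and 1 = 1);
      SetSCL(True);                 { the data bit is valid while SCL is high }
      SetSCL(False);
    end;
  SetSDA(True);                     { release SDA so the slave can acknowledge }
  SetSCL(True);
  I2CWriteByte := not GetSDA;       { ACK = slave holds SDA low during the ninth clock }
  SetSCL(False);
end;

Var
  Ok: Boolean;

begin
  I2CStart;                         { example: write $55 into register $00 of a device at address $40 }
  Ok := I2CWriteByte($40 shl 1);    { address byte, write bit = 0 }
  if Ok then
    begin
      Ok := I2CWriteByte($00);      { register address }
      Ok := I2CWriteByte($55);      { data }
    end;
  I2CStop;
end.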


If you are developing embedded systems, either professionally or just for fun, you will sooner or later find yourself in a situation where some device does not respond to I2C commands or does not behave as expected. In such cases there are only two basic approaches. The first one is to add debug routines to monitor I2C communication in order to find the cause of the problem. This approach works very well if the problem is in the software, but it is time-consuming, you need some skill to examine the data at the right places, and it does not find hardware-related problems. The second approach means using an external device to monitor the I2C communication.

This way you can easily check whether the software is sending the right commands and whether the device is responding properly. One popular method for I2C analysis is a module or extension for some digital storage oscilloscopes. This module enables decoding of I2C messages and displaying the complete communication over the I2C bus. This is a very convenient but costly debugging method, because the price of a DSO including the I2C module is beyond reach for many hobby programmers.

Another very popular but still professional way of debugging the I2C bus is to use a simple USB to I2C interface. This device listens to the I2C communication, and the software running on a PC decodes packets and displays all the data related to the I2C communication. It is also possible to save captured data for later analysis or to set triggers for specific events to capture only the communication related to one I2C address.


Another great function of advanced I2C analyzers is the capability to act as an I2C master. This way you can send I2C packets and receive responses from slave devices. This is the quickest way to discover what is not working as it should.

Such I2C analyzers are cheap and also offer many advanced features for decoding other popular protocols like SPI, JTAG and UART. They are very practical, can be connected to any PC including a laptop, and some advanced models also offer basic oscilloscope or logic analyzer functions. Once you see what packets are sent and how individual devices respond to commands from the software, it is very easy to find the cause of the problem. In some cases the problem will be a wrong I2C address or a wrong packet format, but it may also turn out that the problem is a faulty device or bad connections.

Tuesday 17 August 2010

Turbo Pascal - A Brief History

Turbo Pascal is a famous Pascal compiler developed by Borland in the early 1980s. It was one of the first compilers that included an Integrated Development Environment (IDE). Because of this it was possible to write code, compile it, run it and debug it without ever leaving the editor and running other tools. Another strength of the Turbo Pascal compiler was its speed. Compared to other compilers at that time it was very fast.

Turbo Pascal was developed by Anders Hejlsberg, who initially developed Blue Label Pascal and then the Compas Pascal compiler. It was available on the CP/M and MS-DOS platforms. Borland licensed the compiler core and added a user interface and editor. Anders Hejlsberg joined Borland, where he was the chief architect of all versions of Turbo Pascal and the first versions of its successor, Delphi. Anders Hejlsberg is now working for Microsoft as the lead architect of the C# programming language.

The first version of the Turbo Pascal compiler was released in November 1983. It was sold for $49.95 and was very affordable compared to other Pascal compilers. It generated .com executable files which were also pretty fast--a direct consequence of the quality of the code generated by the compiler. The included Integrated Development Environment, fast compilation, fast development cycle (edit, compile, debug), quality of the generated code and affordable price all contributed to the popularity of the Pascal programming language in the 1980s. At that time Pascal was also used as the programming language for teaching in high schools and universities.

Compiler development went on. Later versions introduced a full-screen user interface with pull-down menus, generated .exe files, and supported inline assembly instructions and object-oriented programming. Many advanced features were added to ease software development. The last version of the compiler for DOS, Turbo Pascal 7, had everything needed to get the most out of a DOS program.

One of Borland's most important contributions to the popularity of the Pascal language was a clever approach to language design: simple extensions that filled the gaps in standard Pascal. The most important extension was support for units. A unit is a separate file which can also be compiled separately. Usually a complex program is divided into logical units, which makes code writing and program development easier; a minimal example is sketched below. The second important extension was support for strings. Strings are essentially character arrays with a length byte and can hold arbitrary byte values, not just text. Borland also added support for object-oriented programming, access to absolute memory locations, support for interrupt procedures, inline assembly instructions, etc.
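
As a hedged illustration of the unit concept (the unit name and the function inside it are invented for this example), a minimal Turbo Pascal style unit could look like this:

Unit MathUtil;   { hypothetical example unit; compiled separately into MathUtil.tpu }

Interface        { declarations visible to programs and other units }

Function Square (X: Integer): Integer;

Implementation   { the private part of the unit }

Function Square (X: Integer): Integer;
begin
  Square := X * X;
end;

end.

A program or another unit then gets access to Square simply by listing the unit in its uses clause (uses MathUtil;), and the compiler typically recompiles the unit only when its source has changed.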

If you are interested in compiler construction and Turbo Pascal internals, you can examine the Turbo Pascal compiler source code. This is not the original source code but a compatible compiler written in Turbo Pascal. The source code can also serve as a great lesson in compiler design and implementation.

Monday 16 August 2010

Pascal Programming Language

The Pascal programming language was created in the late 1960s by the Swiss computer scientist Niklaus Wirth. It is based on the ALGOL programming language and named after the French mathematician Blaise Pascal.


Initially, it was intended for teaching computer programming because it encourages structured programming and the use of data structures. This is what the famous "hello world" program looks like:

Program HelloWorld (Output);
begin
  Writeln ('Hello, world!');
end.

Definitions for data types and structures are simple and clear. The language provides an orthogonal and recursive approach to data structures. You can declare arrays of arrays (multidimensional arrays), arrays of records, records containing arrays; file elements can be records, arrays, records containing arrays of records, etc. Some examples:

Type
  TDay = 1..31;
  TMonth = 1..12;
  SimpleArray = Array [1..1000] of Integer;
  Array2D = Array [1..1000, 0..3] of Real;
  Friend = (Barbara, Alice, Rebecca, Laura);
  DateType = Record
    Day: TDay;
    Month: TMonth;
    Year: Integer;
  end;

Var
  Birthday: Array [Friend] of DateType;

The first major milestone in Pascal history was Turbo Pascal. Based on the compiler written by Anders Hejlsberg, the product was licensed to Borland and integrated into an IDE. It was also available on the CP/M platform, but Borland achieved the biggest success with Turbo Pascal for DOS. Borland continued the tradition of successful Pascal compilers with Delphi, a visual rapid application development environment using Object Pascal as its programming language. There is also an open-source compiler available: Free Pascal. It is a 32- and 64-bit compiler for various processors like Intel x86, AMD64, PowerPC, SPARC and ARM. Pascal is also used for the development of embedded systems; compilers are available for microcontrollers like the 8051, AVR and ARM.

In 1983 the language was standardized as ISO/IEC 7185. In 1990 an extended standard was created as ISO/IEC 10206. ISO Pascal is somewhat limited because it lacks features like strings and units. The best-known and most widely used syntax is the Borland Turbo Pascal syntax, which added the necessary features to fill the gaps in the ISO standard. There is also a derivative of Turbo Pascal known as Object Pascal (used in Borland Delphi) which was designed for object-oriented programming.

Because of its popularity among many programmers, some older versions of the Turbo Pascal compiler are now available for download.

Today, Pascal is still popular in various areas, but not as much as it was decades ago. It was replaced mainly by C, which is available for almost any platform. Nevertheless, there is still a large community that finds the Pascal programming language an excellent choice. It is easy to learn and easy to read. If you give your identifiers meaningful names you can read a program almost like plain English text, which makes it very easy to translate an algorithm into a program. In addition, the language is not case sensitive, which is another step closer to the English language.

There are endless debates and comparisons of the Pascal and C programming languages. Some favor Pascal, others prefer C. There is no winner. Both languages are used to describe an algorithm, and it is up to the programmer to choose the preferred one.

Digital Television Broadcasting and Single Frequency Networks

When we talk about digital broadcasting we usually talk about better picture and sound, high definition television, MPEG compression and more choice. However, most people are not aware of the technology behind digital broadcasting and of the advantages of digital technology beyond better picture and sound.

The roots of the transition to digital television broadcasting lie in more effective use of the radio-frequency spectrum. In the analog television world each radio-frequency channel (frequency) carries one TV channel (program), and in order to avoid interference the same frequency can be reused only far away. Digital technology allows advanced compression algorithms to compress audio and video signals, so one frequency channel can carry more than one service (usually three to ten and even more TV channels). It also lets us build a network of transmitters operating on the same frequency, which significantly lowers the number of frequencies (channels) needed to cover a territory.
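
To get a feeling for the numbers, here is a back-of-the-envelope sketch in Pascal. The bit rates are rough assumptions (a digital terrestrial payload of about 20 Mbit/s per frequency channel and about 4 Mbit/s per standard definition MPEG-2 service), not values taken from any particular network:

Program MultiplexCapacity;
{ Rough estimate of how many SD services fit into one frequency channel.
  Both bit rates below are assumed, typical values. }
Const
  ChannelBitRate = 20.0;   { Mbit/s, assumed digital terrestrial payload }
  ServiceBitRate = 4.0;    { Mbit/s, assumed SD MPEG-2 service }
begin
  Writeln ('Services per frequency channel: ', ChannelBitRate / ServiceBitRate : 0 : 0);   { about 5 }
end.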

There are a few standards for digital television broadcasting. For terrestrial television, countries use systems like ISDB-T, T-DMB, ATSC and DVB-T. DVB-T is perhaps the most widely used - it is deployed in Europe, Russia, Australia, India and many other countries. With the exception of ATSC, which uses single-carrier 8VSB modulation, these systems are based on COFDM - Coded Orthogonal Frequency Division Multiplexing, a modulation scheme with many thousands of closely spaced carriers, each carrying digital information.

Frequency plans for DVB-T are based on allotments - areas where all transmitters transmit on the same frequency. How can they transmit on the same frequency without causing interference? There is interference, but within certain limits it is constructive and even helps in demodulating the signal. At any point, signals from different transmitters arrive at different times, but since the transmission is digital, they all carry the same symbols - the digital data with which every carrier is modulated. Each symbol is prolonged with a guard interval, and as long as the differences in arrival times stay within the guard interval, the same symbol from different transmitters can be received without inter-symbol interference. This is the basic principle of Single Frequency Networks (SFN).


Of course, the maximum distance between transmitters operating on the same frequency depends on the length of the guard interval. With proper planning of SFNs, distances of about 70 km can be achieved. This means that all transmitters in such an area operate on the same frequency and broadcast the same content - a huge advantage over analog television, where many frequencies would be needed to cover the same area.
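
A quick sanity check of that figure, sketched in Pascal and assuming the common DVB-T 8K mode in an 8 MHz channel (useful symbol duration 896 microseconds) with a 1/4 guard interval:

Program GuardDistance;
{ Maximum transmitter separation allowed by the DVB-T guard interval,
  assuming 8K mode in an 8 MHz channel and a 1/4 guard interval. }
Const
  SpeedOfLight = 299792458.0;   { m/s }
  UsefulSymbol = 896e-6;        { s, useful symbol duration in 8K mode }
  GuardFraction = 0.25;         { 1/4 guard interval }
Var
  GuardInterval, MaxDistance: Real;
begin
  GuardInterval := UsefulSymbol * GuardFraction;        { 224 microseconds }
  MaxDistance := SpeedOfLight * GuardInterval / 1000;   { convert m to km }
  Writeln ('Guard interval: ', GuardInterval * 1e6 : 0 : 0, ' us');
  Writeln ('Maximum transmitter separation: ', MaxDistance : 0 : 1, ' km');
end.

The result, roughly 67 km, agrees well with the 70 km figure mentioned above.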

It is obvious that digital broadcasting has many advantages. Digital networks need to be synchronized to operate properly in SFNs, which can sometimes be tricky. It is also true that the cost of a digital network is high, but it is shared among many services. And most importantly, the cost of the digital network is significantly lower than the cost of the most valuable resource - the radio-frequency spectrum.

More about transition to digital terrestrial television and DVB-T multiplexes in European countries can be found on this interesting website.

Recording Studio Software History

When computers found their way into homes they were used for every possible and imaginable task. Audio recording was no exception. The first music software applications were promising, but from today's perspective they were very modest, due to the limitations of computers at that time. Now we have two main streams of personal computing, PC and Mac, and both are used in professional recording studios with a plethora of complex software applications.


The 1980s was a very important decade for music production and recording. MIDI started to emerge, Yamaha introduced the DX7 synthesizer, samplers like the Akai S1000 were very popular, and the first music software applications were written for the microcomputers popular at that time. Various software sequencers were written for the Commodore C64, Sinclair ZX Spectrum and Apple II. A real breakthrough was the Macintosh with its graphical user interface: it had windows with icons and a mouse pointer. Mark of the Unicorn developed Performer, the first sequencer for the Macintosh.

The Atari ST was also important for the history of MIDI sequencers. Designed as a gaming computer with a graphical user interface, it also featured MIDI I/O and was cheaper than the Mac. Steinberg Cubase and Emagic Notator were first developed for the Atari ST.

The first PC software applications were the Cakewalk MIDI sequencer and the SCORE music notation package. However, at that time PCs with early versions of Windows were not as stable as they are today, and many musicians preferred the Mac, for which CODA's Finale software appeared at the end of the 1980s. Cubase and Notator were later ported to both the Mac and PC platforms.

In 1989 Digidesign introduced one of the first hard disk audio recording systems, Sound Tools. It was a two-track recorder/editor used with Q-Sheet software. In 1990 the first combined MIDI and audio sequencer was introduced: Opcode's Studio Vision, which used Digidesign's Sound Tools hardware for audio. The 4-channel Pro Tools appeared in 1992. There was also one less popular microcomputer, the Acorn Archimedes, with an interesting piece of software called Sibelius. It was a score writing package which was later ported to Mac and PC. Later in the 90s Cubase VST (Steinberg) and Logic Audio (Emagic) both implemented notation features.

Computers became faster, with more RAM and disk capacity, so the next trend was multi-track recording. Steinberg worked on MIDI + audio sequencers like Cubase VST (Virtual Studio Technology). Third-party developers welcomed the plug-in feature and a new market emerged. Emagic and Mark of the Unicorn also adopted the plug-in approach. In the 1990s Pro Tools introduced the 64-track MIX system with 16/24-bit audio at 44.1 or 48 kHz. At that time Cubase VST, Logic Audio and Pro Tools were all available on the PC platform.

In 1999 Steinberg introduced Nuendo, which offered 96 kHz recording and 5.1 surround audio. Pro Tools offered surround audio in 2002, by which time it had become a standard for professional recording studio software. Pro Tools 5.1 proved its capability of recording MIDI sequences and audio tracks, and its user interface was simple yet powerful for recording, editing and mixing audio. At the same time Logic Audio was the most popular sequencer on the Mac platform. Digidesign introduced Pro Tools HD (sampling at 96/192 kHz) in 2002, when the new operating system for the Mac, OS X, became available. Cubase SX and Logic Audio were also released for OS X. Pro Tools 6.0 for OS X became available in 2003.


Some ownership changes also occurred: Digidesign was acquired by Avid, Sony acquired Sonic Foundry, Emagic was acquired by Apple, Adobe acquired Syntrillium's Cool Edit Pro software and renamed it Adobe Audition, and Steinberg was acquired by Pinnacle. Now every leading recording studio application runs on both popular platforms, PC and Mac, and stability is no longer an issue.

One of the big players in professional audio recording is still Digidesign's Pro Tools. There are actually three flavors of Pro Tools, all of which share the same user interface and file format; the primary distinction is the hardware they complement. Pro Tools|HD runs on DSP-powered Pro Tools|HD hardware and is mainly used in professional environments, Pro Tools LE is used in home studios and works with a variety of Digidesign hardware including the Mbox 2 family, and Pro Tools M-Powered delivers even more options via compatibility with dozens of M-Audio interfaces. Some audio engineers, producers and remixers use Pro Tools hardware with third-party software instead of the original Pro Tools software.

Computers and software in music recording and production are unavoidable. We can hardly imagine working with analog tapes and mixers any more. Digital signal processing has raised audio technology to a new level, and personal computers have evolved to a point where everybody can afford a home recording studio. Cheap hard disks allow us to record a practically unlimited number of tracks at arbitrary sample rates. Music recording has never been easier. There are also some disadvantages to this new technology: it is easy to compress music and make it louder, destroying the original dynamics and the life it once had, and clipping on CD masters has become common. However, the advantages of using computers in recording studios are huge. You only need the right software and some skills.

More about computers and software used in recording studios can be found at the Recording Studio Software website. There you will also find descriptions and previews of books about recording studios and a comparison of recording studio software.

The Difference Between Analog and Digital Television

What is digital television? When we talk about digital television we usually mean digital television broadcasting. Digital television broadcasting can use different platforms: cable, satellite or terrestrial, and each platform uses a different transmission system. How can we receive digital signals? We need a digital receiver, either integrated into the TV set or a stand-alone set-top box connected to an old TV set.

But there is no single standard for digital television broadcasting. For terrestrial digital television broadcasting there are four major and incompatible transmission (modulation) standards. For example, North America uses ATSC, China uses DMB-T, Japan and Brazil use ISDB-T, while Europe, Russia, India, Australia and many other countries use DVB-T. In addition, there are many codecs (algorithms) that can be used to compress audio and video; MPEG-2 and MPEG-4 are the most commonly used compression standards for video. This means that you need a digital receiver compatible with the transmission standard and codecs used in your country.


But this variety of standards is nothing new. We had a similar situation with analog television. The following parameters in analog television broadcasting can have different values:

  • Number of lines
  • Frame rate
  • Channel bandwidth
  • Video bandwidth
  • Audio offset
  • Video modulation
  • Audio Modulation
  • Color system

This means that you had to have a TV set compatible with the standard used in your country in order to watch TV. However, almost all recent analog TV sets are able to receive and display the common standards used worldwide. These analog standards were defined many years ago, they were not modified, and no new analog television standard was added. This meant a stable situation for decades.

Now this has changed. In the digital world it is easy to invent a new method or algorithm that is better and more efficient than the old one. A typical example is the transmission standard DVB-T. Its successor, DVB-T2, is incompatible with the old DVB-T standard but brings more capacity, robustness and flexibility. MPEG-4 is likewise a newer and better compression method than MPEG-2.

This means that digital technology will continue to develop, and new, better and more complex (incompatible) standards will come. A practical consequence of this rapid development is that you will have a plasma or LCD TV set to display the picture and a separate set-top box compatible with the digital standards used in your country. The TV set will probably last 7 years or more, while the set-top box will have a much shorter life.

If you are interested in implementation of digital terrestrial television you can check the website of Igor Funa where you can get many practical examples of DVB-T broadcasting in many countries and you can also examine a DVB-T multiplex transport stream analysis in real-time.

Sunday 15 August 2010

Switching to Digital Television Broadcasting

We are now faced with a transition to digital broadcasting. But what does it actually mean? Digital technology was already present in radio and TV production twenty years ago. Now it is time to implement it in transmitters, one of the final links toward listeners and viewers who depend on terrestrial reception. Digital television is a broad term. We should distinguish between digital production (making the program) and actual broadcasting - transmitting signals to our rooftop antennas. It is possible to produce a TV program with the latest digital technology and transmit it in analog, or to produce it in the old-fashioned analog way and transmit it in digital. So be careful when you talk about digital television.

If you don't know what the difference between analog and digital signals is, take a look at this simple example. You can describe the size of your TV screen in two different ways. If you show the size with your hands, that is an analog value: the size of the screen is represented by the distance between your left and right hand. This distance can change by arbitrarily small amounts, and the value (in this case the distance between the hands, i.e. the screen size) can be anything between zero (both hands together) and the maximum span of your arms.

To express the screen size digitally you would measure it with a tape measure or ruler, noting the number at the end of the screen and rounding it to some appropriate number of decimals. This number now represents the digital (numerical) value of the screen size. What you have done is actually an analog-to-digital conversion. Since you have rounded the number, it is not the exact value of the screen size, but it is sufficient for most purposes. Representing the exact value would take an infinite number of decimal places, which is impossible to achieve and usually unnecessary.
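
A toy Pascal sketch of the same idea; the screen size used here is just a made-up number for illustration:

Program ScreenSizeADC;
{ A toy illustration of analog-to-digital conversion by rounding:
  an "analog" value (any real number) is rounded to one decimal place,
  giving a digital approximation that is good enough in practice. }
Var
  AnalogSize, DigitalSize: Real;
begin
  AnalogSize := 81.2837465;                      { hypothetical exact screen diagonal in cm }
  DigitalSize := Round (AnalogSize * 10) / 10;   { keep one decimal place }
  Writeln ('Analog value : ', AnalogSize : 0 : 7);
  Writeln ('Digital value: ', DigitalSize : 0 : 1);   { 81.3 }
end.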

Why are we switching to digital television broadcasting? The main reason is more effective use of the radio-frequency spectrum. In the analog world each TV channel (program) occupies one frequency (radio-frequency channel) with a bandwidth between 6 MHz (USA) and 8 MHz (Europe). The amount of radio-frequency spectrum available to television broadcasting is limited, so there is a fixed number of channels (frequencies) that can be used for television broadcasting. These frequencies can be reused only if the transmitters operating on the same frequency are far enough apart to avoid interference. Many countries had used up all the frequencies assigned to them, and further expansion of terrestrial television was not possible. A new technology had to be developed, and it was. Audio and video signals are digital and can be effectively compressed with various compression methods, so digital broadcasting can use one frequency channel to broadcast a package of compressed television, radio and data services called a multiplex.

Different digital standards have been developed. ATSC is used in North America, DMB-T in China and ISDB-T in Japan, while Europe, Australia and many other countries have decided to use DVB-T. Each of these standards transmits a stream of digital data; this transport stream contains the compressed audio and video. Two codecs (compression standards) are commonly used: MPEG-2 and the newer, better MPEG-4. All these standards are mutually incompatible, so you need a digital receiver (set-top box) compatible with the transmission and compression standards used in your country.

What does digital television broadcasting bring us? More channels, better picture and sound quality, the possibility of high definition television, wide screen picture, multi-channel sound and new services. But don't buy a new plasma or LCD TV if you already have a working TV set at home. Get a digital set-top box and enjoy the new services with minimal cost.


If you are interested in technical details of digital terrestrial television you can take a look at the DVB-T transport stream analysis and status of the DVB-T in Slovenia, Austria, Italy, Australia, Latvia, Estonia, and some other countries.

Compilers For 8051 Microcontrollers

Most computers in use today are embedded in some electronic device, from appliances to mobile devices. Such computers are called embedded systems. A key component of an embedded system is a microcontroller. This is a microprocessor with an emphasis on I/O operations. The role of a microcontroller is to control electronic devices, providing all the necessary switching, measurement and communication with the outside world. The microcontroller is the brain of the device.

The first microcontrollers emerged in the 1970s. They were 8-bit devices capable of running a program from internal ROM or external EEPROM memory. One of the popular microcontrollers was the Intel 8051. Intel developed a family of microcontrollers named MCS-51, of which the 8051 was probably the most popular member. It is amazing that this architecture is still popular today: manufacturers like Atmel, Silicon Labs and NXP still use the 8051 core in their microcontrollers. This means that the tools developed thirty years ago can, at least in theory, still be used today to develop programs for 8051-compatible microcontrollers.


The 8051 family of microcontrollers is special because it has several types of memory that need special instructions to access them. The basic memory is located in the DATA segment and most instructions can access it. The size of this memory segment is 256 bytes. The upper 128 bytes are reserved for the Special Function Registers, memory-mapped registers that control the functions of the microcontroller. The lower half is divided into three parts: addresses 0 to 31 hold the registers R0 to R7 in four register banks (0 to 3), addresses 32 to 127 are general purpose memory locations, and within that range addresses 32 to 47 are also bit-addressable. Most 8051 microcontrollers (except the original 8051) also have an additional 128 bytes of IDATA memory, located at addresses 128 to 255, which is similar to DATA memory but can only be accessed indirectly with special instructions. 8051 microcontrollers also support external memory in the XDATA segment; its size is 64 KB and it can be accessed only indirectly with a few special instructions. Finally, there is a special bit-addressable space where individual bits can be accessed (256 bits in total): the first half is general purpose memory while the upper half maps to the bit-addressable Special Function Registers.

This complicated memory model makes compiler construction a complicated task. But because 8051 microcontrollers were well accepted in industry and are also present in many hobby projects, quite a few companies decided to develop their own 8051 compiler. There are many commercial C compilers for 8051 microcontrollers available. Most of them are part of a commercial package together with an integrated development environment (IDE), debugger and simulator; among those, the Keil IDE/compiler is probably the most popular. There is also one popular free C compiler, SDCC (Small Device C Compiler). SDCC is a retargetable, optimizing ANSI C compiler that targets the Intel 8051, Maxim 80DS390, Zilog Z80 and Motorola 68HC08 based microcontrollers. SDCC is Free Open Source Software, distributed under the GNU General Public License (GPL).

The other hemisphere in 8051 programming is Pascal. The Pascal programming language was designed by Niklaus Wirth in the late 1960s, with the main purpose of teaching programming. The language itself is focused on structured programming and has many constructs for data structures. Borland Turbo Pascal was probably one of the most successful Pascal compilers around; it was very popular in the 1980s and early 1990s. Its successor was Borland Delphi, a visual rapid application development tool still in use today. Pascal is rarely used in embedded programming, although compilers are also available for AVR, ARM and PIC microcontrollers.

There is probably only one commercial Pascal compiler for 8051 microcontrollers, the KSC Pascal51, which clearly shows the relative size of the markets for C and Pascal compilers. However, there is also Turbo51, a free Pascal compiler for 8051 microcontrollers. Turbo51 is a fast single-pass optimizing compiler with Borland Turbo Pascal 7 syntax and an advanced multi-pass optimizer.

The situation is similar with other popular microcontroller families like AVR, ARM and PIC: a plethora of C compilers is available and only a few Pascal or Basic compilers. Of course, we should not count assemblers, which are the basic tools for every processor or microcontroller. The most important thing about a compiler is the code it generates, which should be highly optimized for size and speed. Comparing the code size of a program written for the same task with SDCC and with Turbo51, we can conclude that the result does not depend on the programming language: either C or Pascal can be used to create compact and optimized code. Which language to use is simply the personal preference of the programmer.

It is very interesting to compare the generated code for a particular C or Pascal statement, for example on the 8051 microcontroller. We get practically the same microcontroller instructions. This is another confirmation that a high-level language is only a tool to describe the algorithm; it is the task of an optimizing compiler to generate the individual microcontroller instructions that do what is needed.
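
As a hedged illustration, here is a trivial Pascal fragment with the kind of 8051 code an optimizing compiler could plausibly emit shown in the comments; the instructions are typical, not a verbatim listing from SDCC or Turbo51:

Program CodeSizeDemo;
{ The comments show plausible 8051 instructions for each statement,
  assuming the compiler places Counter in the DATA segment. }
Var
  Counter: Byte;
begin
  Counter := 0;               { MOV  Counter,#0  }
  repeat
    Counter := Counter + 1;   { INC  Counter     }
  until Counter = 10;         { MOV  A,Counter   }
                              { CJNE A,#10,loop  }
end.

An equivalent loop written in C and compiled with an 8051 C compiler would typically boil down to the same handful of instructions.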


The microcontroller market will continue to grow. One of the most important markets for microcontrollers is the automotive industry; it is a driving force in the microcontroller market, especially at its high end, and many microcontrollers were developed specifically for automotive applications. On the other hand, almost every electrical device from toys to appliances uses some form of embedded system. The conclusion is that as the microcontroller market grows, the need for high-quality embedded compilers will grow with it, leading to better compilers that generate faster and more optimized code.

If you are programming 8051 microcontrollers and like the Pascal programming language, then you should check out the Pascal compiler for 8051 microcontrollers.