Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this is true, then I’d seriously consider this the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
For background on MiniDisc, see this post of mine from earlier on.
What does audio input give us? Well, you can plug in a condenser microphone for starters, and do away with other personal recording devices. You could also plug in the outputs of other equipment like home electronics, handheld devices, concert mixing desks for bootlegs and so on. And the beauty of the iPod is that you just take it home and everything gets sucked straight into iTunes, which you can then drop into an audio app of your choice, edit, mix and burn, and you’re done. Using a professional boom mic, you can record high quality sound to the iPod and transfer it directly into Final Cut Pro or iMovie.
DJs are already replacing CD collections with iPods, and it won’t be long, assuming the audio input rumour is true, before we’re able to mix our own audio at any time and place we wish. Random access, digital, high quality audio, directly transferable to and from Mac and Windows, software-upgradable sound quality, and a USB/Bluetooth connection. I’m sorry, but that’s a MiniDisc killer. No wonder Sony are suddenly releasing so many devices based on hacked MiniDisc technology, as they’re about to have 15 years of technology development made redundant virtually overnight. A classic example of product panic. You probably won’t believe me, but I actually love Sony products, and most of my home electronics equipment is high end Sony, but aside from a period of about 4 years when it was relevant, MiniDisc is a flawed late-1980s technology that I at least won’t be sad to see disappear. Goodbye and good riddance.
Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this were true, then I’d seriously consider this the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
A little history of MiniDisc is probably in order. Back when Sony and the Dutch company Philips invented CDs, we suddenly had digital audio in our lounge rooms, cars and even in our Walkmans. This was a great money spinner for Sony and Philips, not because they could sell CDs, as Sony wasn’t actually in the music business at that stage, but because they could sell their manufacturing plant technology and the compact disc certification mark. Only Sony and Philips had developed the CD manufacturing technology, so electronics companies were required to license the playback LASER technology from them, and the record companies were required to pay for the privilege of having that little compact disc logo on their product. This is why the current CD DRM technologies which prevent digital copying of CDs have Philips a little frustrated and Sony in a bit of a schizophrenic quandary: the DRM doesn’t actually conform to the Sony and Philips standard, and therefore can’t use the compact disc logo, which ultimately means the record companies don’t have to pay for it either. Sony of course is now in the record business, having bought Columbia Records back in 1989. Also, you can tell the difference between Philips and Sony manufacturing by the see-through plastic centre on a CD, which is clear for Sony and opaque for Philips. But I digress.
The problem with CDs, and why people were still buying cassettes, was that the CD was read-only, and home equipment that could manufacture a CD seemed a long way away, until of course Pioneer invented the technology to do it. At least my memory says it was Pioneer, so I may be wrong. In fact I searched the CD-R FAQ and couldn’t find a reference to it, but I’m sure if you email the maintainer, Andy McFadden, who by the way is also an old Apple IIer like me, he’ll track down the answer for you.
So to plug the gap, Philips invented the Digital Compact Cassette (or DCC), a digital version of the old stereo cassettes we knew and loved, which made sense considering they had also invented the original cassette to begin with. Digital audio, in its raw form, is simply a series of values representing the position of a waveform over time; in the case of CD, 44,100 samples per second at 16-bit resolution. 44,100 samples per second, or 44.1KHz, was chosen because the highest frequency our ears can hear is around 20KHz, and you need at least two samples per cycle of the highest frequency you want to reproduce (the Nyquist limit), so 44.1KHz covers the range of hearing with a little headroom. The original Fairlight music computer sampled at 50KHz by the way, and DAT tape, while variable, is able to sample at 48KHz, which is why DAT is still so popular. This representation is called PCM, or Pulse Code Modulation, and is the basis of digital audio.

Anyway, in order to cram in the huge amount of data required to store digital audio, Philips came up with a technology called PASC, or Precision Adaptive Subband Coding. The basic idea is that you chop the incoming audio into 32 frequency bands (or subbands), ranging from low bass sounds up to around 22KHz, remove the sounds from each band which probably can’t be heard, and then join the bands back together again. This effectively compresses the data, but it is of course lossy, so every time you record with it you lose more of the original waveform. That was considered fine, because by designing it to be good for effectively only one generation of copying, you get a built-in form of DRM. The problem with PASC was that the bands were divided equally across the spectrum, whereas hearing is inherently logarithmic, which meant the lower bands actually represented more perceptive range than the upper bands. Perhaps this was supposed to address the compression-of-harmonics problem, but I’ll come to that a little later.

Anyway, DCC failed. It wasn’t random access, so you still had to fast forward and rewind; the PASC compression obviously wasn’t ideal for home taping; and the recording head still worked by magnetically orienting metallic particles on the tape (the same as in standard cassette tapes) to simulate a purely digital recording format. With that type of head, as with standard cassettes, the tape rubs against the head, causing both the tape and the head to wear down. But the big reason it failed was because of Sony.
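Before we get to Sony, a quick aside to put numbers on that PCM description. This is just the CD audio arithmetic (44.1KHz, 16-bit, stereo) as a tiny Python sketch, nothing DCC-specific:

```python
# CD-quality PCM, as described above: 44,100 samples/sec, 16 bits, stereo.
SAMPLE_RATE = 44_100   # samples per second
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

# Two samples per cycle is the minimum needed to capture a frequency,
# so the highest frequency this sampling rate can represent is:
nyquist_hz = SAMPLE_RATE / 2
print(f"Nyquist limit: {nyquist_hz:.0f} Hz")                # 22050 Hz

# Raw, uncompressed data rate -- the pile of data PASC set out to shrink:
bytes_per_second = SAMPLE_RATE * (BIT_DEPTH // 8) * CHANNELS
print(f"Raw PCM rate: {bytes_per_second / 1024:.0f} KB/s")  # ~172 KB/s
```

That works out to roughly 10MB per minute of audio, which is exactly the problem PASC was invented to deal with.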
Sony came up with MiniDisc, using a similar analog-head-and-metallic-particle recording technology, but developed it as a rotating disc, giving them random access capability like a CD. They also developed LASER-based guidance for accuracy, so that the read/write head could use more closely spaced tracks and subsequently store more data. Sony also developed their own compression scheme called ATRAC, or Adaptive TRansform Acoustic Coding, which works similarly to PASC but divides the signal into 52 logarithmically spaced subbands instead, giving each band equal importance across the spectrum of hearing. Having killed off DCC, Sony is still flogging this 1980s-based technology as modern audio equipment.
The big flaw in PASC and ATRAC is the fact that sound, particularly in music, is based on harmonics. A simple note played on a guitar, for example the A at 440Hz, isn’t just 440Hz; it also generates harmonics at multiples of the fundamental, so 880Hz, 1.3KHz, 1.8KHz and so on. The problem is that these harmonics fall into different subbands when compressed, and may or may not be removed, depending on whether the compression feels like removing them. So, pull out a couple of harmonics, and you end up with a thinner, more echoey presence to the sound. This is the basis of why MP3 and the rest sound so crap at low bit rates. The importance of harmonics tends to be lost on technologists, which is why audiophiles still love vinyl, and a lot of professional recording is still done in the analog domain.
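As a rough illustration of that point, here’s a small Python sketch that maps the harmonics of A440 into two made-up band layouts, one equal-width (PASC-style) and one logarithmic (ATRAC-style). The band layouts are illustrative stand-ins, not the real PASC or ATRAC filter banks:

```python
import math

fundamental = 440.0                                  # A above middle C
harmonics = [fundamental * n for n in range(1, 9)]   # 440, 880, 1320, ...

def equal_width_band(freq, bands=32, top=22_050):
    """PASC-style: 0-22.05KHz carved into equal-width slices."""
    return int(freq // (top / bands))

def log_band(freq, lowest=27.5):
    """ATRAC-style: one band per octave above the lowest piano A."""
    return int(math.log2(freq / lowest))

for f in harmonics:
    print(f"{f:6.0f} Hz -> equal-width band {equal_width_band(f):2d}, "
          f"log band {log_band(f)}")
```

Run it and you can see the harmonics of a single note scattering across several bands, which is why an encoder deciding band by band what to throw away can quietly thin out that note.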
MiniDisc and DCC use lossy compression, the same as MP3, AAC and MPEG-4, so they’ll all degrade through successive generations of copying. That’s why the record companies aren’t completely up in arms about this: most audio luddites will rip music at some really low bit rate, which makes it sound tinny and echoey, and won’t realise how bad it sounds. A recent article by Jupiter Research claimed that, with personal devices (particularly MP3 players) continually increasing their storage, there was a limit to how many songs people would actually want, probably no more than 1,000, and that the ever-growing memory sizes were therefore just for the sake of publicity. What they fail to realise is that increased disk storage actually means the capability to finally return to raw, non-lossy PCM encoding for much higher quality audio. I can finally toss that 1MB song away, and have a perfect digital copy at around 60MB instead. As bandwidth and storage increase, lossy compression such as MP3 will become a distant memory, a short 20-year period in history which we’ll look back on with melancholy.
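For what it’s worth, the 60MB figure checks out. A quick sketch, assuming a six-minute track at CD quality:

```python
# Assumed: a 6-minute track at CD-quality PCM (44.1KHz, 16-bit, stereo).
bytes_per_second = 44_100 * 2 * 2          # samples * bytes/sample * channels
track_bytes = bytes_per_second * 6 * 60    # six minutes of audio
print(f"{track_bytes / 1024**2:.1f} MB")   # roughly 60MB, uncompressed
```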
Now, where was I? I can’t believe I’ve remembered all this crap. Oh yes, the new iPod, the MiniDisc killer. This needs a new post.
Every few years, I get into an argument (sorry, a discussion) with someone about why Apple’s platforms are inherently better designed for users and usability than competing platforms, whatever the domain. So far the iPod seems to be the exception to the rule, but not excessively so.
The problem is that in most cases you can’t really argue the point, particularly to a Windows or Linux fanatic, because their reasons for liking their preferred platforms typically bear no resemblance to usability. Although all three of these computing platforms are moving closer together, the step from the Windows desktop to the Mac desktop is still at least as big as the step from the Linux desktop to the Windows desktop. Anyone upgrading from a Linux to a Windows desktop (and I choose the term upgrading intentionally) is more often than not amazed at the new-found usability and consistency, so their argument is that anything more would simply be nit-picking, or purely subtle or academic improvements. I’ll liken that to a person upgrading from a horse and buggy to a Model T Ford, not realising that a Mercedes-Benz S55 AMG would probably make their driving experience a lot more pleasurable. Please note that I’ve played fair by resisting the obvious stereotypical Ferrari comparison.
But ultimately, a 15-minute argument isn’t going to convince a Windows desktop nut, who is an expert in Outlook 2003’s weird-arsed assortment of UI controls and who has already decided to have an argument about desktop usability, that Apple designs are better. The best you can probably do is use that old chestnut of pointing out roughly how much they don’t understand about UI design, and then let them feel a little inadequate for a few hours. Because if they did understand it better, or knew how much they didn’t know, they certainly wouldn’t have started such a dumbarse argument in the first place.
I recently bit the bullet and moved the Windows task bar on my work machine to the left side of the screen, to match both my home Windows box and my Mac OS X dock setting. It reminded me of Bruce Tognazzini, who amongst other things spent 14 years at Apple and founded their Human Interface Group, which as far as I’m aware made Apple the only computing company at that time with a group dedicated to defining and enforcing the rules of user interaction with a computer, or at least with desktop GUIs. My task bar change was instigated particularly because of Fitts’s Law, which I was reminded of recently while using some Windows application that forced me to do everything in little task steps through the main menu bar, causing my hand to go partially numb. Fitts’s Law, amongst other UI basics, is well described for UIs by Tog. In fact, reading through that page reminded me how much there is that you need to know before you can make intelligent UI decisions, and how much of the theoretical stuff you consciously forget over time. It frustrates me when I can see a broken UI, but can no longer argue why it is broken.
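If you want the gist without reading Tog’s page: Fitts’s Law says the time to hit a target grows with distance and shrinks with target size; the common Shannon form is MT = a + b * log2(D/W + 1). Here’s a toy Python sketch, with made-up a and b constants, comparing a small menu item against a screen-edge target (which the cursor can’t overshoot, so its effective width is enormous):

```python
import math

# Shannon formulation of Fitts's Law; a and b are illustrative constants,
# not measured values.
def movement_time_ms(distance_px, width_px, a=50.0, b=150.0):
    return a + b * math.log2(distance_px / width_px + 1)

# A 20px-wide menu item 800px away, versus a target at the screen edge,
# where the cursor stops dead and the effective width is huge.
print(f"{movement_time_ms(800, 20):.0f} ms")     # ~850 ms
print(f"{movement_time_ms(800, 2000):.0f} ms")   # ~120 ms
```

Which is the whole point of parking the task bar, or the dock, hard against a screen edge.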
Anyway, I wasn’t planning to go into a long rant about interaction design or how good Apple are, because, yes you guessed it, like that’s going to convince you, right? The point of this post, before it went astray, was to highlight Apple’s possible new MiniDisc killer. In fact, because I’ve wasted so much space, I’m going to move it into a separate post.
CNet continues their usual negative Apple spin in saying Apple’s Jobs nixes iPod partnerships. Well, no he doesn’t, he just “nixes” RealNetworks wanting the iPod to support RealAudio. You remember RealAudio, the proprietary audio streaming format that isn’t free and doesn’t work in the default Windows or Mac media players, so you have to download a client to listen to it. Yep, how stupid of Apple not to want to degrade their wonderful product by propping up a bunch of has-beens and their proprietary and dying media format. That’s a no-brainer, and CNet should know better.
Due to the fabulous technology available to us in this modern age, I regularly synchronise information between home and work via the habitat that is email (I’d hyperlink the research paper, but I’ve lost it). I just love trawling through those extra emails I get from myself each day, letting them clog my inbox, and forcing me to disconnect and reconnect my thought patterns while in a completely disparate environment and usage context. Well, I don’t really, but an ex-girlfriend once told me that sarcasm was the lowest form of wit, supposedly paraphrasing Shakespeare, so who am I to deprive her of a degree of satisfaction. Although I couldn’t find any trace of him ever saying it, and she’s the one with a master’s in English lit. It would be ironic if irony weren’t supposedly the highest form of wit, which would of course render her claim rather useless. I prefer witless, but I digress…
So I’m trawling through my emails, and get to this one from a few days ago titled: BITCH ABOUT U.S. ENGLISH IN WORD!!!!!!!!!!!!!!1. Not sure where the “1” came from, but considering the mood I was in, I probably have a good idea.
OK, so all that just so I can bitch about U.S. English in Word. Well, U.S.-based readers wouldn’t have the problem that we have, in that Australian English is different to U.S. English. Not only is our (correct) spelling different, like colour, analyse and arsewipe, but so is the use and interpretation of the language.
In Word 2003, at least in the Windows version (the Mac version doesn’t seem to have the problem), any document you write will show “English (U.S.)” in the status bar. You can click this to change it to the default “English (Aus)”, but it typically changes back to U.S. when you least expect it. Sure, you can set it in the Options… dialog, but again, it will intermittently revert while you’re editing a document. The closest you can get is to select all in the current document and then change the default to Australian. That will give you at least 15 minutes of writing before Word again starts suggesting U.S. spellings.
Now typically we’d write this off as a simple bug, but it has been in every version of Word that I can remember since different language dictionaries were available. You’d think their test team would have picked it up by now.
But not only does Word think that U.S. English is the only English; I’ve also posted here before about how our only Australian dictionary, the Macquarie (which I’m so frustrated with that I refuse to inline hyperlink it), is now pay-only.
I guess with Microsoft having analyzed the situation, and the Macquarie sniffing the color of our money, we’re all destined to end up speaking U.S. English. Asswipes.
Updated 17th June 2004: Due to guilt and high Google traffic, I’ve now provided a solution. Enjoy.