Category Archives for Rant
A few days ago, the SMH showed a picture of Nick Riley, supposed secret lover of Joanne Lees. The interesting thing wasn’t the story or the photo itself, but the fact that the SMH put the following quote under the photo (I’ve highlighted the interesting bit).
Nick Riley pictured in Sydney in 2001. WARNING: this photograph is protected by copyright. Republish it and you’re in big trouble.
They’ve subsequently removed the warning, but I kind of like the cut of their jib, to quote Blackadder.
The questions are: was this a mistake that they’ve now fixed? Was it there to cover them while they had exclusivity on the photo, and now it’s not so much of a problem? Is my posting a copy of the photo against the law, even though I’ve blurred it out to cover my arse? Does my linking everything back to the SMH site validate my use of the photo, or the blurred copy of it anyway? Or is our expected formal language changing, and are the SMH embracing this to speak to us in the conversational language we’d expect down the pub?
I love CSS Vault, I really do. The last six months they’ve really gone from strength to strength, and now people consider them the ultimate CSS gallery site.
But why does every single site in their gallery end up being 105.2 pixels wide? Well, I’m exaggerating ever so slightly, but the whole point of HTML is that the browser renders the site as it thinks it should look, and the user can adjust their window size to whatever they feel comfortable with.
For example, I have my work machine set to 1600 wide, and all my browser windows are, you probably guessed it, 1600 wide. Want to know what a typical “fully CSS compliant” web site looks like on that kind of set up? Crap, basically. Imagine if you had a large six foot wide whiteboard to play with, but you were restricted to using only the middle 30cm or so.
Don’t get me wrong, all the sites in the CSS Vault gallery look absolutely fabulous, but why do they all have to be 98.4 (slightly exaggerated) pixels wide?!
I figure the reason is that developers will always abuse what they’re given. Remember the IE custom support days, remember the table-as-layout hack days, and now remember the CSS-as-fixed-layout days. A lot of work goes into these standards (HTML and CSS) to make them completely browser independent, or at least malleable enough for user personalisation, so why don’t web developers understand the spirit of the standards? Do they think every browser window on earth is either 640, 800 or 1024 pixels wide? I think I’ve seen browser windows in every single pixel size from 300 to 1600. That’s the reality I’m afraid, and forcing a 927 pixel wide window user into only seeing 800 wide is just plain stupid. It is the user for whom the site exists, so do them a favour: instead of showing off how talented graphic designers are and how ignorant they are of the actual site content, do something for the audience for a change.
My window is 1600 pixels wide, I do it for a reason, and if I see another web site which uses only a couple of hundred pixels, then claims to be super standards compliant, I’m gonna go nuts! In fact, I might even start a new CSS Vault, and call it Skinny Vault, with the URL www.useless.skinny.websites.com. Know of a useless skinny web site? Enter a comment on this post, and together we can rid the web of these graphical dunderheads once and for all.
We’ve been saying it for years, but the Internet has a lot to answer for. Here’s a few reasons why.
Fermilab is a U.S. laboratory for research into high energy particle physics, second only to Europe’s CERN (European Laboratory for Particle Physics), but it owns the world’s highest energy particle accelerator and collider, the Tevatron. Basically they accelerate protons and antiprotons and smash them together to try and identify their contents. I won’t get into gluons and bosons, but you can check their site if you want to know more. Anyway, a current project of theirs is to chart the history of the expansion rate of the universe by photographing distant galaxies and measuring the effects of the dark energy which makes up around 70% of the universe. To do it, they’re using a CCD (Charge-Coupled Device), the same technology used in consumer digital cameras, of 500 megapixels. Compare that to the current 5 megapixel cameras you’ll get at the local camera store. (via slashdot)
Apparently the other night, 64 million Americans voted for American Idol. Compare that with November 2000, when 99 million voted for members of the U.S. House of Reps and 105 million voted for presnit G.W. (via GLOBALIZE THIS!)
Here’s a story about Washingtonienne, a Washington-based staff assistant’s anonymous sex exploits blog, which was saved from the trash heap by a caring individual after it was dumped; Wonkette, who you may know, outed her; and the Washington Post article gives the background as well as an interview with her after she was fired over the blog. This would sound like a regular sex scandal with a bit of high tech mixed in if it weren’t for the fact that she started the blog on 10th May and was fired and moving to New York by 23rd May.
Finally, we now have a TV show in Sydney called Mondo Thingo with Amanda Keller, original Towards 2000 member and part time breakfast radio comedienne, which I caught the last 10 minutes of tonight. Basically it is a TV version of Boing Boing and similar web sites. Not a copy mind you, but the same cultural and weird kind of stuff.
So what do all these have in common? Not too much really, but take Fermilab: before the Internet (and arguably New Scientist, which I did read back in the pre-Internet days) I wouldn’t have known how a CCD worked, who Fermilab were or why they were looking for dark energy. More specifically, most sites just take all that for granted, and the amusing part for them is simply that Fermilab has a 500 megapixel camera. Not because that’s unbelievable, but because us geeks would like to have one.
Before the Internet, who would have cared that 64 million Americans voted for anything, let alone a TV show. The fact that a political anti-globalisation web site (which is a good site by the way) is highlighting it as a way of showing the dumbed down intelligence of modern human beings makes it particularly interesting. But the fact that it was written the day after, that he had all the figures at hand to write with, and that he’s not a professional journalist, says something about the modern world.
Regarding Washingtonienne, the whole story runs for only 13 days, that’s less than two weeks. The blog started on 10th May, built up popular acclaim in the space of about a week and a bit, then spread across all the tabloids in Washington, the woman was outed, fired, interviewed and the story finally closed and forgotten by 23rd May. Welcome to the Internet!
Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this is true, then I’d seriously consider this the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
For background on MiniDisc, see this post of mine from earlier on.
What does audio input give us? Well, you can plug in a condenser microphone for starters, and do away with other personal recording devices. You could also plug in the outputs of other equipment like home electronics, handheld devices, concert mixing desks for bootlegs etc. And the beauty of the iPod is that you just take it home and everything gets sucked out into iTunes, which you can then drop into an audio app of your choice, edit, mix and burn, and you’re done. Using a professional boom mic, you could record high quality sound to the iPod and transfer it directly into Final Cut Pro or iMovie.
DJs are already replacing CD collections with iPods, and it won’t be long, assuming the audio input rumour is true, before we’re able to mix our own audio at any time and place we wish. Random access, digital, high quality audio, directly transferable to and from Mac and Windows, software upgradable sound quality, and a USB/Bluetooth connection. I’m sorry, but that’s a MiniDisc killer. No wonder Sony are suddenly releasing so many devices based on hacked MiniDisc technology, as they’re about to have 15 years of technology development made redundant virtually overnight. A classic example of product panic. You probably won’t believe me, but I actually love Sony products, and most of my home electronics equipment is high end Sony, but aside from a period of about 4 years where it was relevant, MiniDisc is a flawed late 1980s technology that I at least won’t be sad to see disappear. Goodbye and good riddance.
Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this is true, then I’d seriously consider this the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
A little history of MiniDisc is probably in order. Back when Sony and the Dutch company Philips invented CDs, we suddenly had digital audio in our lounge rooms, cars and even in our Walkmans. This was a great money spinner for Sony and Philips, not because they could sell CDs, as Sony wasn’t actually in the music business at that stage, but because they could sell their manufacturing plant technology and the compact disc certification mark to the consumer electronics companies. Only Sony and Philips had developed the CD manufacturing technology, electronics companies were required to license the playback LASER technology from them, and the record companies were required to pay for the privilege of having that little compact disc logo on their product.

This is why the current CD DRM technologies which prevent digital copying of CDs have Philips a little frustrated and Sony in a bit of a schizophrenic quandary, because the DRM doesn’t actually conform to the Sony and Philips standard, and therefore cannot use the compact disc logo, which ultimately means they don’t have to pay for it either. Sony of course is now in the record business, having bought Columbia Records back in 1989. Also, you can tell the difference between the Philips and Sony manufacturing by the see-through plastic centre of a CD, which is clear for Sony and opaque for Philips. But I digress.
The problem with CDs, and why people were still buying cassettes, was because the CD was read only, and home equipment that could manufacture a CD seemed a long way away, until of course Pioneer invented the technology to do it. At least my memory says it was Pioneer, so I may be wrong. In fact I searched the CDR FAQ and I couldn’t find a reference to it, but I’m sure if you email the maintainer, Andy McFadden, who is also an old Apple IIer like me by the way, he’ll track down the answer for you.
So to plug the gap, Philips invented the Digital Compact Cassette (or DCC), a digital version of the old stereo cassettes we knew and loved, which made sense, considering they had also invented the original cassette to begin with.

Digital audio, in its raw form, is simply a series of values representing the position of a waveform over time; in the case of CD, 44100 samples per second at 16 bit resolution. 44100 samples per second, or 44.1KHz, was chosen because the maximum frequency our ears can hear is around 20KHz or so, and 44.1KHz gives you at least two samples per wavelength right up to that limit, which is just enough to capture the positive and negative swings of the wave for playback. The original Fairlight music computer sampled at 50KHz by the way, and DAT tape, while variable, is able to sample at 48KHz, which is why DAT is still so popular. These samples are called PCM or Pulse Code Modulation, and are the basis of digital audio.

Anyway, in order to squeeze down the huge amount of data required to store digital audio, Philips came up with a technology called PASC or Precision Adaptive Subband Coding. The basic idea is that you chop the incoming audio into a set of equally spaced frequency bands (or subbands), ranging from low bass sounds up to around 22KHz, remove the sounds which probably can’t be heard from each band, and then join them back together again. This effectively compresses the data, but it is of course lossy, so every time you record with it, you’ve lost data from the original waveform. However this was fine, because by designing it to effectively be good for only one generation of copying, you have a built-in DRM.

The problem with PASC was that the bands were divided equidistantly across the spectrum, whereas our hearing is inherently logarithmic. This meant that the lower bands actually covered far more of our perceptive range than the upper bands. Perhaps this was supposed to address the compression of harmonics problem, but I’ll come to that a little later.

Anyway, DCC failed. It wasn’t random access, so you still had to fast forward and rewind, the PASC obviously wasn’t ideal for home taping, and the audio head was still using metallic particles oriented via magnetic forces (the same as in standard cassette tapes) to store what was supposedly a purely digital recording format. With this type of recording head, as in standard cassette tapes, the tape itself rubs against the head, causing both the tape and the head to wear down. But the big reason it failed was because of Sony.
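To make that two-samples-per-wavelength point concrete, here’s a minimal sketch of the arithmetic. It’s plain Python, nothing to do with PASC or any real codec, just the numbers behind the 44.1KHz choice:

```python
# A minimal sketch (plain Python, nothing to do with PASC itself) of the
# two-samples-per-wavelength arithmetic: how many PCM samples a 44.1KHz
# recording captures per cycle of a given tone.

SAMPLE_RATE = 44_100  # CD audio: samples per second, per channel

def samples_per_cycle(freq_hz: float) -> float:
    """Number of samples taken during one full cycle of a tone at freq_hz."""
    return SAMPLE_RATE / freq_hz

for tone_hz in (440, 1_000, 10_000, 20_000, 22_050):
    print(f"{tone_hz:>6} Hz -> {samples_per_cycle(tone_hz):6.2f} samples per cycle")

# Anything above SAMPLE_RATE / 2 (22.05KHz, the Nyquist frequency) gets fewer
# than two samples per cycle and can't be reconstructed faithfully.
```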
Sony came up with MiniDisc, using a similar analog head rubbing against metallic particles technology, but developed it as a rotating disc, giving them random access capability like a CD. They also developed LASER-based guidance for accuracy, so that the analog read/write head could use more closely spaced tracks and subsequently store more data. Sony also developed their own compression scheme called ATRAC or Adaptive TRansform Acoustic Coding, which works similarly to PASC, but divides the signal into 52 logarithmically divided subbands instead, giving each band equal importance in the spectrum of hearing. Having killed off DCC, Sony is still flogging this 1980s technology as modern audio equipment.
The big flaw in PASC and ATRAC is the fact that sound, particularly in music, is based on harmonics. A simple note played on a guitar, such as the A at 440Hz, isn’t just 440Hz; it also generates harmonics at integer multiples of the fundamental, so 880Hz, 1320Hz, 1760Hz and so on. The problem is that these harmonics fall into different subbands when compressed, and may or may not be removed if the compression feels like removing them. So, pull out a couple of harmonics, and you end up with a more echoey or thin presence to the sound. This is the basics of why MP3 and the rest are so crap at low bit rates. The importance of harmonics tends to be lost on technologists, which is why audiophiles still love vinyl, and a lot of professional recording is still done in the analog domain.
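If you want to see the subband point in numbers, here’s a toy sketch. The band layouts are my own made-up ones (not PASC’s or ATRAC’s real tables); it just shows which band each harmonic of an open A lands in under an equal-width split versus a logarithmic one:

```python
# A toy comparison (my own band layouts, not PASC's or ATRAC's real tables)
# of where the harmonics of an open A (440Hz) land when the spectrum is cut
# into 32 equal-width bands versus 32 logarithmically spaced bands.

import math

NUM_BANDS = 32        # assumed band count for both layouts
TOP_HZ = 22_050.0     # top of the CD-audio spectrum
LOW_HZ = 50.0         # bottom edge used for the logarithmic layout

def equal_band(freq_hz: float) -> int:
    """Index of the equal-width band containing freq_hz."""
    return min(int(freq_hz / (TOP_HZ / NUM_BANDS)), NUM_BANDS - 1)

def log_band(freq_hz: float) -> int:
    """Index of the logarithmically spaced band containing freq_hz."""
    position = math.log(freq_hz / LOW_HZ) / math.log(TOP_HZ / LOW_HZ)
    return min(int(position * NUM_BANDS), NUM_BANDS - 1)

FUNDAMENTAL_HZ = 440.0
for n in range(1, 9):
    harmonic_hz = FUNDAMENTAL_HZ * n
    print(f"harmonic {n} ({harmonic_hz:7.0f} Hz): "
          f"equal-width band {equal_band(harmonic_hz):2d}, log band {log_band(harmonic_hz):2d}")

# With equal-width bands the first eight harmonics all squeeze into the bottom
# six of the 32 bands; the logarithmic layout spreads them across a dozen,
# closer to how our ears divide up the spectrum.
```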
MiniDisc and DCC use lossy compression, the same as MP3, AAC and MPEG-4, so they all degrade through successive generations of copying. That’s why the record companies aren’t completely up in arms about this: most audio luddites will rip music at some really low bit rate, which makes it sound tinny and echoey, and won’t realise how bad it sounds. A recent article by Jupiter Research claimed that with personal devices, particularly MP3 players, increasing their storage, there was a limit at which people would probably want no more than 1000 songs, and that manufacturers were therefore just increasing memory size for the sake of publicity. What they fail to realise is that increased disk storage actually means the capability to finally return to raw non-lossy PCM encoding for much higher quality audio. I can finally toss that 1MB song away, and have a perfect digital copy at around 60MB instead. As bandwidth and storage increase, lossy compression such as MP3 will become a distant memory, a short 20 year period in history which we’ll look back on with melancholy.
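For what it’s worth, here’s the back-of-the-envelope arithmetic behind those two file sizes. The song length and the very low bit rate are my own assumptions, purely for illustration:

```python
# Back-of-the-envelope numbers behind the "1MB versus roughly 60MB" claim.
# The song length and the 32kbps rip are assumptions for illustration only.

SAMPLE_RATE = 44_100      # samples per second, per channel
BYTES_PER_SAMPLE = 2      # 16 bit resolution
CHANNELS = 2              # stereo
SONG_SECONDS = 5.5 * 60   # an assumed five-and-a-half minute song

raw_pcm_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * SONG_SECONDS
lossy_bytes = (32_000 / 8) * SONG_SECONDS   # a tinny 32kbps rip

print(f"raw PCM copy : {raw_pcm_bytes / 1_000_000:5.1f} MB")
print(f"32kbps rip   : {lossy_bytes / 1_000_000:5.1f} MB")
```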
Now, where was I? I don’t believe I’ve remembered all this crap. Oh yes, the new iPod, the MiniDisc killer. This needs a new post.
Every few years, I get into an argument, sorry, a discussion, with someone about why Apple’s platforms are inherently better designed for users and usability than competing platforms, whatever the domain. So far the iPod seems to be the exception to the rule, but not excessively so.
The problem is that in most cases you can’t really argue the point, particularly to a Windows or Linux fanatic, because their reasons for liking their preferred platforms typically bear no resemblance to usability. Although all three of these computing platforms are moving closer together, the step from the Windows desktop to the Mac desktop is still at least as far as from the Linux desktop to the Windows desktop. Anyone upgrading from a Linux to a Windows desktop, and I choose the term upgrading intentionally, is more often than not amazed at the new found usability and consistency, so their argument is that anything more would be simply nit picking, or purely subtle or academic improvements. I’ll liken that to the person upgrading from a horse and buggy to a Model T Ford, not realising that the Mercedes-Benz S55 AMG would probably make their driving experience a lot more pleasurable. Please note that I’ve played fair by resisting the obvious stereotypical Ferrari comparison.
But ultimately, a 15 minute argument isn’t going to convince a Windows desktop nut, who is an expert in Outlook 2003’s weird-arsed assortment of UI controls and who has already decided to have an argument about desktop usability, that Apple designs are better. The best you can probably do is use that old chestnut of pointing out roughly how much they don’t understand about UI design, and then let them feel a little inadequate for a few hours. Because if they did understand it better, or knew how much they didn’t know, they certainly wouldn’t have started such a dumbarse argument in the first place.
I recently bit the bullet and moved my Windows task bar on my work machine to the left side of the screen, to match both my home Windows box and my Mac OS X dock setting. It reminded me of Bruce Tognazzini, who amongst other things spent 14 years at Apple and founded their Human Interface Group, which as far as I’m aware made Apple the only computing company at that time to have a group dedicated to defining and enforcing the rules of user interaction with a computer, or at least with desktop GUIs. My task bar change was instigated particularly because of Fitts’s Law, which I was reminded of recently while using some Windows application that forced me to do everything in little task steps through the main menu bar, causing my hand to go partially numb. Fitts’s Law, amongst other UI basics, is described far better by Tog than I could manage. In fact, reading through that page reminded me how much there is that you need to know before you can make intelligent UI decisions, and how much of the theoretical stuff you consciously forget over time. It frustrates me when I can see a broken UI, but can no longer argue why it is broken.
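For the curious, here’s a minimal sketch of Fitts’s Law in its common Shannon formulation. The constants are made-up illustrative values, not measured ones; the point is just why an edge-anchored target like a docked task bar is so much quicker to hit than a small in-window control:

```python
# A minimal sketch of Fitts's Law in its Shannon formulation,
# T = a + b * log2(D / W + 1). The a and b constants here are made-up
# illustrative values, not measured ones.

import math

def movement_time_ms(distance_px: float, width_px: float,
                     a_ms: float = 100.0, b_ms: float = 150.0) -> float:
    """Predicted time to acquire a target of width_px at distance_px away."""
    return a_ms + b_ms * math.log2(distance_px / width_px + 1)

# A small in-window button versus a screen-edge target like a docked task bar:
# the edge stops the cursor, so the target is effectively very "deep" along
# the direction of travel and its index of difficulty plummets.
print(f"20px button, 800px away    : {movement_time_ms(800, 20):6.1f} ms")
print(f"edge target, same distance : {movement_time_ms(800, 500):6.1f} ms")
```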
Anyway, I wasn’t planning to go into a long rant about interaction design or how good Apple are, because, yes you guessed it, like that’s going to convince you, right? The point of this post, before it went astray, was to highlight Apple’s possible new MiniDisc killer. In fact, because I’ve wasted so much space, I’m going to move it into a separate post.
Most, if not all, shareholders in public companies do it to make not just money, but more money than they’re gambling on the stock. We’d all like more money, and unless you’re already set up for life, don’t even think of emailing me to say you don’t. But traders, particularly day traders, tend to sit more at the greedy end of the spectrum. Having said that, when public companies are answerable to shareholders, it is usually the short term gains that take priority, with the losers, more often than not, being morality, human rights and the environment.
So it is refreshing whenever I stumble across a public company which has managed to debunk the myth that doing what is right is the antithesis of a profitable business.
Visit the L’Oreal web site (yes, they make hair colour products, so don’t start OK), and the first thing you see, apart from all their Flash crap, is a picture of a woolly mammoth and a news story about how L’Oreal researchers are doing fossil hair research. In fact, the four news items currently on their home page are: the woolly mammoth; a story about how L’Oreal’s Welsh CEO, Mr Lindsay Owen-Jones, made Time’s most influential 100 business people list; a story about the company’s sponsorship of a business student contest; and a piece on L’Oreal’s commitment to socioaesthetics, which they explain is…
the practice of aesthetic care for ailing and weakened persons weakened for psychological, physical, or social reasons.
The only explanation I could find for socioaesthetics by the way, was this:
- The entirety of a set of internally codified aesthetic values as perceived by most members of a society.
- That which is created through the verbal and nonverbal communication of one’s everyday experiences, in relation to the level of positive consensus one’s community has amongst its members regarding that experience.
…which is not exactly the same thing.
Regardless, 50% are business related stories, and 50% are, at least at face value, good deeds.
The other interesting thing you notice on their site is a complete subsite for their 5 Exceptional Women of Science award, in conjunction with UNESCO, sitting next to the news stories, just under the main left menu panel.
All up, it looks like these guys are doing good science, caring for people, and promoting their morals and good will as the most important aspect of the company. This seems even more significant given that they were recently voted the number one company in Europe that people would like to work for (subscription required). They’re a French company by the way.
Compare this with their competitors’ web sites: Goldwell, although they do support the Young Survival Coalition with “promotional activities”; Schwarzkopf; and Clairol (turn your sound down).
I don’t claim to know if L’Oreal are doing everything right, but based on the relatively small amount of research I did, when a company both talks up and is proud to put their ethical, environmental and scientific achievements at the top of their web site, you can’t help but sit up and take notice. It bodes well for the future of publicly owned companies and the future of our planet. And completely by accident, they also make the Blue-Black hair dye I used when I was a good corporate citizen. Small world indeed. Now, if they only did a plain blue…
While finding a hyperlink to Malcolm Gladwell’s The Tipping Point for my previous post, I read through some of the more recent (May 2004) reviews for the book on Amazon. Most were 5 stars, except one review of 1 star which said that Gladwell was simply stating the obvious, and that the book did not really shed any new light on what we already know. It’s interesting that as time goes by, newer generations benefit subconsciously from the research of their ancestors, without realising that there was a time when the idea was still unknown.
Apart from the fact that this reviewer is too stupid to give an objective and contextually fair review of the book, and should realise that at the time it was published, this was a defining work in the domain, it reminds me of what I call “the duh factor”.
The duh factor is when you explain to someone how something works, why something is the way it is, a new piece of research, or some other piece of insight and they just look at you and say “Oh duh. Didn’t you know that already?” The problem is that this may have been the first piece of research to actually prove this most obvious of insights, but the person you’re speaking to is too busy trying to “one up” you to notice.
As at today, in the U.S., G.W. Bush is more unpopular than he is popular. “Yeah, of course he is!” says the person you’re speaking to, before you mention that this is the first time official polling results show the actual percentage of unpopularity being above 50%.
The duh factor. I find the best reply to someone giving you the duh, is to just punch them in the face. Your mileage may vary. Oh duh!
Here’s a great article on undercover marketing, utilising ideas from Malcolm Gladwell’s The Tipping Point and related research.
SCENARIO NO. 4: The Hasbro company recruited 1,600 cool kids, aged 8 to 10, in Chicago and paid them each $30 to play a new hand-held video game called “Pox” and tell their friends about it.
I read about this when it happened back in 2001, and when the initial anger about brainwashing minors subsided, I started to wonder what depths marketers would sink to, in what they’re now referring to as peer to peer, or p-to-p, marketing. This article gives a few examples, but the marketers are saying that they only use people who are genuinely excited about the product they’re promoting, or to put it more directly:
“When I’m running a program for a car company, I want that guy out there to be the living, breathing embodiment of that brand.”
So where does that put us ethically? The line between marketing and enthusiasm starts to become blurred, and when people are rewarded or awarded free goods, as opposed to monetary payment, for their excellent promotional enthusiasm skills, what does that mean for advertising cynics like us? The lines between advertising, promotion and enthusiasm are about to get a lot grayer (U.S. spelling courtesy of the Macquarie Dictionary’s anti-Australian conversational English stance).
It was Ayn Rand, in one of her essays from the 1966 book Capitalism: The Unknown Ideal, who really ended my rather naive idealist preference for socialism, when she explained that the idea of capitalism is the optional materialist advance of the individual, but not at the expense of others. This not at the expense of others has been a semi-guiding principle for me ever since I can remember. Yet it seems to have had limited adoption by most of the world’s population.
Why are people such selfish arseholes?
I found myself in a situation recently where a person had the opportunity to gain from an interaction with me, without any loss on my part, but at the expense of others who were not aware of the opportunity. This person had to improvise a story that I would believe, but fortunately I was able to see through it.
Although sanity and equality prevailed in this case, what makes people want to fuck over other people for their own personal gain? I don’t care whether it is as simple as jumping a queue at the cinema, or who gets served first in a store, most people seem to take the opportunity if they are able to do so. I don’t. If I see someone waiting to be served, for example, and someone else pushes in front, even if I have no position to gain and the shopkeeper hasn’t noticed the order the customers arrived in, I can’t stand by and let it happen without saying anything.
Selfishness is a character trait, and dare I say it, a character flaw. It helps build the fucked up greed driven world we live in, and will contribute to our ultimate downfall.
So, if you’re thinking of pushing in front, or adapting your story to suit yourself, please think again, or you may just invoke one of my particular character flaws: revenge.