USB 3.0 will be here in just a few seconds!

That's how fast it is!

In yet another installment of: "Oh, I can't believe people use USB when Apple's Firewire is so superior" .... ti ti ti ti ti* ... "Apple no longer supports firewire, you will now all use USB" ... USB 3.0 will be released Monday. USB 3.0 will transfer 25 GB in 70 seconds.

To put that in perspective, the same transfer would take 13.9 minutes with the current USB 2.0 protocol and 9.3 hours on USB 1.0. Looks like the future of wired syncs and backups is bright and blazing. [LH]
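Those figures are easy enough to sanity-check. Here's a rough back-of-the-envelope sketch; the ~50% real-world efficiency factor is my assumption, and it's roughly what the quoted 13.9-minute and 9.3-hour numbers imply:

```python
# Rough transfer-time check for a 25 GB payload over each USB generation.
# Assumption (mine, not from the post): sustained throughput is about half
# the nominal signaling rate, which is what the quoted times imply.

PAYLOAD_GB = 25
NOMINAL_MBPS = {"USB 3.0": 5000, "USB 2.0": 480, "USB 1.0": 12}
EFFICIENCY = 0.5                 # assumed fraction of nominal rate achieved

payload_megabits = PAYLOAD_GB * 8 * 1000   # decimal GB -> megabits

for name, mbps in NOMINAL_MBPS.items():
    minutes = payload_megabits / (mbps * EFFICIENCY) / 60
    print(f"{name}: {minutes:7.1f} minutes")
```

That works out to roughly 1.3 minutes, 13.9 minutes, and about 555 minutes (9.3 hours), respectively.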

Well, my next external hard drive is going to have that ... it will probably be faster than my internal hard drive.

__________________________
* 'ti ti ti ti ti' is KiLese (a Central African language) for 'in this part of the story, time passes...'

This is not exactly on topic, but can you imagine how much shorter Tolkien's works would have been had he known of that bit of KiLese?

"Well, my next external hard drive is going to have that ... it will probably be faster than my internal hard drive."

It would be pointless to use an interface that fast for an external hard drive, since the interface would be much faster than the hard drive itself. More likely uses are RAID volumes and video links.

By Virgil Samms (not verified) on 16 Nov 2008 #permalink

Three things:

1. FireWire already defined an "S3200" mode back in 2002 that is even faster. ("(25 gigabytes) per (70 seconds) = 2.85714286 gigabits per second", sayeth Google, and S3200 is 3.2 Gbps.) It was defined in the same standard that created "FireWire 800".

2. 2.8 Gbps sustained or burst?

3. Will the host hardware actually support that kind of speed? Many machines can't even keep up with USB 2, because it puts most of the load on the CPU. Firewire delegates the work to the interface hardware and supports memory mapping so the CPU doesn't need to get involved nearly as often.

By Benjamin Geiger (not verified) on 16 Nov 2008 #permalink

But, Benjamin, are you a Mac user who didn't get the memo from central HQ? They dropped firewire. This either means that USB is better, or that Apple does not actually always pick the best hardware ... they just say they do.

I honestly have no opinion on this. I'm just observing from the outside. If firewire is inherently better than USB, why did Apple drop firewire (or did they, really)?

They didn't drop Firewire, except on the new MacBooks. I think they consider it a "pro feature" now. (Which would imply that it's better than USB.) Silly, if you ask me, but...

By Benjamin Geiger (not verified) on 16 Nov 2008 #permalink

Interesting. So I'm not throwing out my firewire cables yet, although I have very little to hook them up to.

I doubt Tolkien's works would have been shortened significantly by a knowledge of KiLese, but I am curious: how is "ti ti ti ti ti" pronounced?

Apple has not dropped Firewire per se, but it appears not to be a priority for them. Firewire is still the preferred interface for pro audio devices and higher-end digital video cameras (Red, for example).

Firewire was originally developed as a high-speed, plug-and-play serial interface for streaming large amounts of data from hard drives and audio and video devices, back when USB was simply a low-speed serial bus meant for keyboards, mice and similar devices.

Since then, USB 2.0 was developed and became more popular than Firewire, due largely to Intel's refusal to properly support Firewire (competition and all that). It competed pretty nicely with Firewire 400, while Firewire 800 never really caught on beyond the pro market. Firewire, for technical reasons, still remained a "better" spec for use with HDs, video and audio.

Now, with the USB 3.0 spec out, which will (finally) properly handle streaming data and large data block transfers, it seems Firewire may have been made obsolete. From my perspective, both worked just fine; I saw no real difference in my day-to-day use.

I am really looking forward to USB 3.0 as an addition to eSATA as a high-speed bus for large hard drives - especially for HD video and digital cinema work.

FireWire's primary goal was not so much mass storage devices - though it does (did?) a better job on them too - as isochronous media streaming. The resulting very low latency and jitter (6 or 7 PCM stereo audio samples at 44.1 kHz, and <1 sample, respectively, for FW400, if the supporting hardware is properly designed) made it the only serious option for external storage devices in the pro audio and video market, and especially for Digital Audio/Video Workstations. And the Apple OS philosophy both powered and followed that.

FireWire also has a much more flexible connectivity topology. In particular it supports direct peer-to-peer connections, which USB does not, and it is not a polled protocol like USB. Not that anybody seems to have taken much advantage of the P2P feature in the consumer market. Also, most FW devices have pass-through connectors, and there is no cabling difference between the ends - unlike the A, B, mini-B, male/female adapter mess that is the current state of USB. I have never seen a hub on a USB drive, for example, so you cannot daisy-chain them.
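To put those sample counts into time units, here is a small sketch (the helper name is mine, not anything from the FireWire spec):

```python
# Convert the buffer sizes quoted above (in samples) into time at 44.1 kHz.
def buffer_latency_us(samples, sample_rate_hz=44_100):
    return samples / sample_rate_hz * 1e6   # microseconds

for n in (6, 7, 1):
    print(f"{n} sample(s) -> {buffer_latency_us(n):5.1f} microseconds")
```

So the figures quoted work out to roughly 140-160 microseconds of latency and under about 23 microseconds of jitter.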

One of the niftiest features of the Mac laptops - a real lifesaver when you need it - has been their ability to boot up as a FireWire mass storage device (Target Disk Mode), a trick which lets you recover data should the drive fail to boot the OS, and which also serves as a machine-to-machine user account migration tool. I wonder if they plan on retaining that for their USB 3.0 ports?

Speed: if the data path is all hardware, DMA over current memory buses should be close to full speed. 200 MHz with 32-bit word transfers seems eminently doable these days, and many designs go wider - up to 128 bits - to further relax the timing requirements. Only the cheaper designs that trade off bus width for price will not keep up.
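A rough check on that arithmetic (my numbers, in the same back-of-envelope spirit as above):

```python
# Raw bandwidth of a 200 MHz, 32-bit-wide DMA path versus the ~2.86 Gbps
# implied by the 25 GB / 70 s figure in the post.
bus_hz, bus_width_bits = 200e6, 32
dma_gbps = bus_hz * bus_width_bits / 1e9    # 6.4 Gbps raw
usb3_payload_gbps = 25 * 8 / 70             # ~2.86 Gbps needed
print(f"DMA path {dma_gbps:.1f} Gbps vs {usb3_payload_gbps:.2f} Gbps needed "
      f"({dma_gbps / usb3_payload_gbps:.1f}x headroom)")
```

Even on the narrow end that is about 6.4 Gbps, a bit over 2x headroom before going to wider buses.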

I am not sure yet how I feel about making up for inelegant design by adding more raw horsepower. I keep seeing it happen, and each step down that slope seems somehow to have taken a bit of soul with it. I am not sure that going 4x over FW800 speeds is actually going to result in better DAW performance. It is like rounding errors - most people get only the end result from the media producers, which these days is FM-quality MP3s and blocky MP4 video. But to get even that level of end-user quality, the original raw source material and the processing gear have to be very precise, high quality, and large capacity.

Will USB 3.0 be able to service the pro market? At that small a speed margin, and with the USB protocol legacy, I am doubtful. And I am really surprised that Apple dropped FireWire from their laptops. There are legions of performers who take them on-stage and absolutely rely on their FW ports for their rig. Towers will not cut it. ADAT ports only exist at the end of FW cables.

By Gray Gaffer (not verified) on 16 Nov 2008 #permalink

USB needs to be destroyed. If you've ever tried to implement it, you'll know what a disaster it is.

1. The keyword in "Universal Serial Bus" is the word "Bus". It has a DMA-like protocol that extends an intra-machine bus out onto the wire; there is very little serial about it at all.

2. The "bus" is mastered by a single node (although new versions of the protocol attempt a negotiated sharing arrangement), lose that node and the whole thing falls apart.

Think of (1) and (2) as being like taking your front-side bus outside the box via a cable - that's what USB is doing. It sucks.

3. The "bus" and mastering is further compromised by the protocol that

a.) re-frames and re-schedules the bus every millisecond

b.) requires the master to then poll every device on the bus to find out what's there, what its capabilities are and what it requires next, so that it can schedule and allocate a time slice to that node (come back, token ring, all is forgiven). This is close to lunatic.

4. Because of all these timing requirements, the "bus" is limited in extent. It needs an *active* repeater (aka a hub) after 5 meters, and hubs can only be chained to a depth of 5 (or basically 128 nodes). This is horrible in industrial settings (or even many household ones) where you need a cable run longer than 25 meters. Also, often you can't get power to those intermediate hubs.

The overall problem here is that many applications of serial communications have long cable runs, but USB has meant that true serial has disappeared from the hardware world.

5. How good is the "device independence"? Absolutely lousy. If I have a device at the end of a cable, the protocol link to it via USB is very low level - to have the device work at all I need a heap of intelligence on the master node, which is essentially running the device in software (it's a "bus", remember - more analogous to the wires between your CPU and memory than to that old RS-232 cable).

This means that if I can't get a driver that runs on the master node (and, in the case of Windows, on the specific version of the OS together with the required subset of DLLs), then I'm hosed.

USB == disaster

The people responsible should be shot.

Apple still has Firewire ports on the MacBook Pro, just not the MacBook or MacBook Air. As I said, they seem to consider it a pro feature now.

By Benjamin Geiger (not verified) on 17 Nov 2008 #permalink

We build our Linux PCs from parts. I have noticed over the last couple of years that FireWire ports are becoming more of a rarity. USB seems to be winning. Much like in the OS world, this has nothing to do with technical superiority, but everything to do with market share. We're just going to have to live with it.

By Virgil Samms (not verified) on 17 Nov 2008 #permalink

VHS vs Beta
IDE vs SCSI
SD/MMC vs CF
USB vs Firewire

Firewire will persist for a while in a niche because it can actually do some things USB simply can't. (It's similar to CF vs SD/MMC: CF is a general device interface, like PCMCIA, while SD/MMC is just memory.)

I second the USB design being a horrible mess. It is a bit like 802.11, where it starts with a crappy framework and piles on additions, up to the veritable kitchen sink, to make it actually work decently in practice. However, that leads to a standard which will forever be crippled by fundamental problems and is a serious pain to implement in anything resembling a complete way. Quite frankly, it wasn't designed for mass storage or 90% of the other applications it is used for.

On the other hand, it's cheap to implement in a mostly functional way, which has made it pretty ubiquitous (IDE vs SCSI).

I have a personal nightmare story though... I'm doing array recordings on custom hardware which writes out to CF flash cards. Anyway, I end up with a stack of cards that have to be copied (~34 GB worth each day working in the field). Should be easy, right? Well, different USB 2.0 card readers end up having wildly different (factor of 10) transfer speeds reading these cards - same manufacturer, same product, different revision even. The best reader I had (a no-name brand $10 job) is on its last legs due to field-work wear and tear, and I'm having a hell of a time finding a new one half as fast. Basically, I have to go buy a stack of different readers, test them all, and end up returning them! Of course, I'm working under Linux with ext2 filesystems, so most of the grafted-on crap to make USB mass storage of VFAT fast isn't applicable and seems to actually make performance much worse.
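Ranking readers like that only takes a sequential-read timer. A minimal sketch, assuming Python 3 on the Linux box; the device path and the ~1 GiB read limit are just example choices, and dropping the page cache first (as root: echo 3 > /proc/sys/vm/drop_caches) keeps repeat runs honest:

```python
#!/usr/bin/env python3
# Time a large sequential read and report MB/s - enough to rank card readers.
# Usage: sudo python3 readspeed.py /dev/sdb     (or a big file on the card)
import sys, time

CHUNK = 8 * 1024 * 1024      # read in 8 MiB pieces
LIMIT = 1024 ** 3            # stop after ~1 GiB

def read_speed(path):
    total, start = 0, time.time()
    with open(path, "rb", buffering=0) as f:
        while total < LIMIT:
            block = f.read(CHUNK)
            if not block:            # end of file/device
                break
            total += len(block)
    return total / 1e6 / (time.time() - start)   # MB/s

if __name__ == "__main__":
    print(f"{sys.argv[1]}: {read_speed(sys.argv[1]):.1f} MB/s")
```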

travc: you might want to consider switching to ext3. Flushing is late on ext2 - it can be very late. Custom or embedded hardware tends to encourage shutting down by power switch rather than by command, and that almost always results in filesystem corruption. The good news in my case is that so far e2fsck fixes things, but I really do not know what was fixed, or whether I have creeping corruption at a higher level as a result. So I am girding myself to make the change. ext3 is at least journalled.

By Gray Gaffer (not verified) on 22 Nov 2008 #permalink