I just watched the always entertaining Peter Paul Chato’s video on the performance of the M1 Mac compared to his Intel and AMD hackintoshes. I had a few thoughts on the performance he was seeing in Final Cut Pro, as well as how the M1’s cost compares to its Intel counterparts, which I left in a couple of comments. So I thought I’d post those thoughts here (edited, since this isn’t a direct response to the video as such) in case a wider audience finds them interesting.

It would be interesting to see how performance compares in Premiere, since I’ve heard it does encoding differently to FCP.

As stated in the video, no Intel iGPU in the Ryzen box means no Quick Sync, hence worse h.265 encoding performance than the Intel box. I suspect the Intel box is using both the iGPU and the Radeon for rendering because it’s doing the first pass on the Radeon via OpenCL, then the actual encoding on the iGPU – although that’s just a guess. I’m also guessing the Ryzen is better at h.264 because it’s using the Radeon alone, which may have a better h.264 encoder than the Intel iGPU, but FCP hasn’t been updated to use the Radeon for h.265 encoding, so it falls back to the CPU. Dedicated hardware will always beat the general-purpose CPU (which is why the M1 will always beat everything at ProRes – it’s got dedicated hardware for decoding and encoding).
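FCP presumably goes through VideoToolbox for all of this, and VideoToolbox deliberately hides which piece of silicon does the work. If you want to poke at this on your own machine, here’s a minimal sketch that asks macOS whether any hardware HEVC encoder is available at all – it won’t tell you which one, just whether a hardware-only session can be created:

```swift
import VideoToolbox

// Probe for a hardware HEVC encoder by asking VideoToolbox to create a
// compression session that *requires* hardware acceleration. If this
// fails, encoding would fall back to the software (CPU) encoder.
func hardwareHEVCEncoderAvailable() -> Bool {
    let spec = [
        kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
    ] as CFDictionary

    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: 3840, height: 2160,          // 4K frame size, arbitrary choice
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: spec,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)

    if let session = session {
        VTCompressionSessionInvalidate(session)
    }
    return status == noErr
}

print(hardwareHEVCEncoderAvailable())
```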

As to why they haven’t bothered updating FCP to use the Radeon for h.265 encoding? At first blush this doesn’t really make sense. My understanding is that the Mac Pro is based on the Xeon W-3200 series (Cascade Lake-W), which also means no iGPU – so does that mean the Mac Pro also sucks at h.265 encoding? I haven’t checked reviews, but I suspect the answer is no.

The reason that answer is no also explains why the Ryzen is so bad at h.265, and why they haven’t updated FCP to use the Radeon to encode it: the T2 chip.

Yep, that security chip isn’t just there to try to thwart hackintoshing (while only succeeding at punishing Apple’s genuine customers when it fails) – Apple also sneaked an HEVC encoding engine into it. The T2 was designed for the iMac Pro, which uses Intel’s Skylake-X based Xeon W series, and hence has no iGPU.

That also explains why the Ryzen falls back to the CPU for h.265 encoding, and why the Intel box failed at h.265 with the iGPU turned off or when running the iMac Pro SMBIOS. FCP first checks for an iGPU with Quick Sync, which it doesn’t find; then it looks for a T2, which it doesn’t find either; so it falls back to the CPU.
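In pseudocode, the selection order I’m imagining looks something like this – to be clear, the probe functions here are completely made up for illustration, not real APIs:

```swift
// My guess at FCP's h.265 encoder selection. hasQuickSyncIGPU() and
// hasT2Chip() are hypothetical probes, stubbed out so this compiles.
enum HEVCEncoder { case quickSync, t2, cpu }

func hasQuickSyncIGPU() -> Bool { false } // hypothetical: Intel iGPU present?
func hasT2Chip() -> Bool { false }        // hypothetical: T2 present?

func selectHEVCEncoder() -> HEVCEncoder {
    if hasQuickSyncIGPU() { return .quickSync } // Intel boxes with the iGPU on
    if hasT2Chip()        { return .t2 }        // iMac Pro, Mac Pro, T2 Macs
    return .cpu                                 // the Ryzen box lands here
}
```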

I guess all that was a long-winded way of explaining why I’m curious about Premiere performance. I’m guessing Adobe don’t have access to the HEVC encoder in the T2, so I’m curious whether Premiere would use the Radeon instead. Or, if they do have access to the T2’s encoder, whether they utilise it over the one in the Radeon.

Which raises the question: why roll their own HEVC engine when there’s one in the Radeon? My guess is simplicity. There’s only one model of T2, whereas there are multiple Radeon options in the iMac Pro. If they don’t all behave identically, FCP first has to figure out which one it’s running on and change how it drives the HEVC encoder depending on the model – and that’s before you start talking about future driver updates changing how things work. The T2 works, and Apple control the drivers and how to access its HEVC encoding engine, so there’s no need to look any further.
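To make that concrete, here’s the kind of per-model branching FCP would otherwise need – the model names are the real iMac Pro GPU options, but the quirks are placeholders I’ve invented:

```swift
// The per-GPU special-casing Apple avoids by standardising on the T2.
// The comments in each branch are invented for illustration.
func configureRadeonHEVCEncoder(gpuModel: String) {
    switch gpuModel {
    case "Radeon Pro Vega 56":
        // one set of encoder capabilities and driver quirks
        break
    case "Radeon Pro Vega 64":
        // a slightly different set
        break
    default:
        // a new GPU or a driver update, and the assumptions break
        break
    }
}
```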

In terms of the “cost” of an M1 compared to an Intel chip, I suspect Apple doesn’t get a huge discount from Intel over the standard wholesale price. Why? Macs aren’t a huge seller compared to the volumes that Dell or HP or Lenovo move, so Apple isn’t likely to get as much of a discount as the bigger vendors. That’s the entire reason for Apple moving to their own silicon. Sure, they could probably get a better deal from AMD – heck, they could probably ask AMD for custom processors with whatever special features they want built in, which is what Microsoft and Sony are managing to get from them for their xBawks and PrayStations. But once you amortise the cost of their own processor development across all their product lines – from Apple Watch, to iPhone, to iPad, to Mac – paying silicon foundry wholesale prices will be way cheaper than whatever the bigger PC vendors pay for Intel chips.
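As a back-of-envelope illustration – every number below is invented, the point is just the shape of the maths:

```swift
// All figures are made up for illustration; none are real Apple numbers.
let annualSiliconRandD = 2_000_000_000.0 // hypothetical yearly chip R&D spend
let unitsPerYear       = 250_000_000.0   // Watch + iPhone + iPad + Mac (guess)
let randDPerUnit       = annualSiliconRandD / unitsPerYear // $8 per device

let foundryPricePerChip = 50.0  // hypothetical wholesale price per M1
let inHouseCostPerChip  = foundryPricePerChip + randDPerUnit // $58
let intelPricePerChip   = 200.0 // hypothetical discounted Intel price

print(inHouseCostPerChip < intelPricePerChip) // true – the gap covers the R&D
```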

And considering they’re offering these Macs at similar prices to their outgoing Intel predecessors, I’d say they’re not loading the full development costs onto these models up-front.

But in my opinion, the M1 isn’t as revolutionary as they’re making it out to be anyway. It’s basically an iPad chip with a few extra features thrown in, and the clock and TDP wound up. There’s no skunk works that’s been hammering away at this separately to the A-series since the first one was released 10 years ago. You can bet the same people who worked on the A-series chips worked on the M1. The initial development costs would’ve been minimal, thanks to all their work on the A-series. And unless we see designs significantly diverge from the A-series, I doubt ongoing development costs would be that huge either. The cost savings over paying for Intel chips will more than pay for R&D.