I'd like to expand (!) on the original question and the answers for a bit. I think the original question was really about the relationship of cetane to compression ratio. As I read it, the questioner was asking: if the fuel ignites at a specific temperature rise from compression, then why and how could diesel engine designers justify having different compression ratios?
That's one of those VGQs.....a Very Good Question! Let me ramble on for a while and see if anything emerges to help with the understanding. Might as well get comfortable. This is going to require some concentration...
The problem implied is this: if 17:1 causes enough compression heat to ignite the fuel, then how and why could a designer go to a higher ratio? After all, if the maximum compression ratio is higher than the compression required for ignition, how can there be any advantage? We know that as the piston comes up, the compression must pass through the ignition point before it can go any higher. And we know that passing through the ignition point will start the burn. So why go to a higher ratio? In fact, how can we even get to a higher ratio? Reading the engine specs, the ratios are different, so it obviously happens, but what's the advantage? And how is it even possible?
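To put some rough numbers on "compression heat," here's a quick idealized calculation. It treats the charge as air compressed adiabatically (no heat loss to the cylinder walls, no real-gas effects), so real in-cylinder temperatures run somewhat lower, but the trend is the point: every common diesel ratio compresses the air far past typical diesel autoignition temperatures (roughly 480-530 K). The intake temperature and the ratios are just illustrative.

```python
# Idealized adiabatic compression of air: T2 = T1 * r^(gamma - 1).
# Ignores heat loss, valve timing, and real-gas effects, so these numbers
# are upper-bound illustrations, not engine measurements.
GAMMA = 1.4          # ratio of specific heats for air
T_INTAKE = 300.0     # intake air temperature in kelvin (illustrative)

def compression_temp(ratio, t_intake=T_INTAKE, gamma=GAMMA):
    """Charge temperature after adiabatic compression by `ratio`."""
    return t_intake * ratio ** (gamma - 1)

for r in (17, 20, 23):
    print(f"{r}:1 -> {compression_temp(r):.0f} K")
```

Whichever ratio you pick, the charge blows past ignition temperature well before top dead center. So the questioner is right that the higher-ratio engine "passes through" ignition conditions on the way up - which is exactly why the interesting part is when the fuel is injected, not when the air gets hot enough.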
The answer to that question is - as several others have stated - in the burn time. It takes a finite amount of time to burn through a charge, and at the early stages of the burn the expansion of the burning gases is slow enough that the piston can continue to rise. These are the things a designer plays with when he selects the (ever-changing) shape of the compression area and the timing of the fuel injection.
We know that the timing of the fuel injection is critical to a diesel. This is because the air is not throttled as it is in a gasoline engine. Throttling a diesel is done only by varying the quantity of fuel.
The question is a good one because the answer is progress. It is true that in the early days of diesel design all diesels had about the same compression ratio. That's where the questioner is reasoning correctly. It was just as he implies, and the reason for the similarity is that there isn't much choice when the fuel injection timing is controlled mechanically. The mechanical system for controlling fuel injection into a diesel combustion chamber is amazingly reliable, but from the designer's standpoint it is a fairly crude way to time and control things. The injection pressures available 20 years ago were fairly low, and the injection timing was set by a mechanical cam. In the real world, there is a dollar-and-wear limit to how precisely a mechanical cam can be shaped. The result of these limitations was that much of the variation in burn time had to be controlled by the shape of the head and combustion chamber. The combustion area itself was varied geometrically, resulting in poly- or multi-spherical combustion chambers. But the largest difference was in the use of direct versus indirect fuel injection. (You should read about those.)
With the advent of computers the designer could now use an electrically controlled fuel injector. The mechanical fuel injection cam was now simply used to provide a positive preload of fuel into the injector. The actual fuel injection now happens at much higher pressures (and more quickly) by using a pulsed high-pressure magnetic injector controlled by a computer. All of a sudden, short time intervals and high fuel flow rates were both possible. The designers had a field day. The "resolution" of the timed injection pulse could be precisely defined. Multiple injections were even possible. And wear on the injector controller was no longer an issue. The first thing the designer did was to inject a tiny bit of fuel as the piston rose, which would then ignite and spread burning gas to all areas of the combustion chamber - just in time for the main squirt of fuel, which could occur later in the piston's rise and at higher compression, because the little bit of burning fuel enhanced the main ignition. Turbocharging could be used to recover previously wasted energy from the exhaust. The net result was that in the 1990s much higher compression ratios and temperatures were suddenly available to the designer.
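To see why splitting the injection into a pilot pulse and a main pulse matters, it helps to look at how much hotter and denser the charge gets in the last few degrees of crank rotation. The sketch below uses a standard crank-slider volume formula and the same idealized adiabatic model as before; the geometry numbers (20:1 ratio, rod ratio of 3.5, injection at 20 degrees and 2 degrees before top dead center) are hypothetical, not taken from any particular engine.

```python
import math

# Rough sketch: charge conditions at a pilot-injection angle vs. near TDC.
# Hypothetical geometry and an adiabatic ideal-gas model - an illustration
# of why a later main injection sees a hotter, denser charge than the pilot.
GAMMA = 1.4
T_INTAKE = 300.0     # intake temperature, kelvin (illustrative)
CR = 20.0            # nominal compression ratio (illustrative)
ROD_RATIO = 3.5      # connecting-rod length / crank radius (typical range)

def cylinder_volume(theta_deg, cr=CR, rr=ROD_RATIO):
    """Cylinder volume in clearance-volume units; theta is degrees from TDC."""
    th = math.radians(theta_deg)
    # Piston travel from TDC in crank radii, for a crank-slider mechanism;
    # full stroke corresponds to 2 crank radii of travel.
    travel = 1 - math.cos(th) + rr - math.sqrt(rr**2 - math.sin(th)**2)
    vd = cr - 1  # displaced volume, in clearance-volume units
    return 1 + vd * travel / 2

def charge_temp(theta_deg):
    """Adiabatic charge temperature at the given crank angle."""
    v_bdc = cylinder_volume(180.0)
    return T_INTAKE * (v_bdc / cylinder_volume(theta_deg)) ** (GAMMA - 1)

for label, ang in (("pilot, 20 deg BTDC", 20.0), ("main,   2 deg BTDC", 2.0)):
    print(f"{label}: {charge_temp(ang):.0f} K")
```

Under these assumptions the main pulse lands in a charge roughly 200 K hotter than the pilot did - already seeded with flame - which is the designer's reward for being able to place each pulse to within a fraction of a degree of crank angle.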
Designers are still making improvements based on that radical advancement in injection technology. So far, the higher compression ratios have been used mostly to make the engine more efficient, more powerful, and cleaner burning. There is still a lot of room for improvement. In fact, the gains in engine efficiency so far are not very large compared to the energy available in the fuel. Much is still wasted in all types of internal combustion engines. There's a lot of work to be done.
Hope this helps. It was fun, and that's the real point.
rScotty