IWETHEY v. 0.3.0

kernels that come out of distributions
I am trying to understand the process. Linus et al maintain the "official" kernel, deciding what patches get in, etc. Then a distribution takes the official kernel and adds its own unofficial patches... because... why? Shouldn't the official kernel be sufficient? And if each distribution modifies, sometimes markedly so, the official kernel, then what is the official kernel used for? Is it simply a standard, or is it actually used for something? I know these are very uninformed questions, thanks for looking and thanks for any light you can shed on this for me.
Why not?
"Then a distribution takes the official kernel and adds its own unofficial patches... because... why?"

Depends on the distro. Reasons can range from device support to support for products like Win4Lin, or other features. Consider the base kernel to be a reference platform, and the custom kernels used by some distros to be ways they can differentiate their products.
[link|http://forfree.sytes.net|]
Imric's Tips for Living
  • Paranoia Is a Survival Trait
  • Pessimists are never disappointed - but sometimes, if they are very lucky, they can be pleasantly surprised...
  • Even though everyone is out to get you, it doesn't matter unless you let them win.


Nothing is as simple as it seems in the beginning,
As hopeless as it seems in the middle,
Or as finished as it seems in the end.
 
 
Distro kernels are heavily patched...
...often in ways that allow the various distribution packaging systems to do their thing.

Without Ubuntu patching and packaging their kernels, I wouldn't be able to do "apt-get install fglrx-driver" and have any reasonable hope of successfully installing the ATI binary video driver, for example.

Custom kernels are really only for people who have very specific needs; the distro kernels are pretty much sufficient for most folk.

And the kernel.org kernel is really for people who don't have much better to do than run make config :-)
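For what it's worth, one rough way to tell whether you are running a distro-packaged kernel or a plain kernel.org build is the version string: distro packages usually tack a package-revision suffix onto the upstream version. A minimal sketch (the sample version strings and the suffix convention are illustrative only; conventions vary by distro, and vanilla -rc/-pre kernels contain dashes too):

```shell
#!/bin/sh
# Crude heuristic: vanilla kernel.org releases look like "major.minor.patch",
# while distro packages usually append a revision suffix after a dash.
# (Illustrative only: vanilla -rc/-pre kernels also contain dashes.)
classify_kernel() {
    case "$1" in
        *-*) echo "distro-patched" ;;   # e.g. 2.6.10-5-386 (Ubuntu-style)
        *)   echo "vanilla" ;;          # e.g. 2.6.10
    esac
}

classify_kernel "2.6.10"         # prints: vanilla
classify_kernel "2.6.10-5-386"   # prints: distro-patched
```

Compare against the output of `uname -r` on your own box to see which kind you have.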


Peter
[link|http://www.ubuntulinux.org|Ubuntu Linux]
[link|http://www.kuro5hin.org|There is no K5 Cabal]
[link|http://guildenstern.dyndns.org|Home]
Use P2P for legitimate purposes!
Some distributions...
Have patches to take out non-free stuff (see the Debian Free Software Guidelines for what counts as non-free).

Some also add support for problem hardware, and some have a plethora of patches they maintain until upstream (Linus' kernel) integrates or "fixes" the problems or bugs or whatever. That is Debian, one of the largest give-back distros. Greg Kroah-Hartman, the Gentoo kernel maintainer and a core developer whose contributions get integrated into Linus' BK tree, boasts that there are only 3-5 patches at any one time in the "newest" kernel package compared to the reference vanilla kernel. Gentoo also has the best "build" testing known to man... because everyone builds in Gentoo (well, almost, as there are prebuilts).

One of the more esoteric patches Red Hat has carried for a number of years is their own VM for the kernel. It is still based on the pre-2.4.10 VM that Linus tore out and replaced for 2.4.10 at nearly the last minute (in terms of the development timeline, that is). <rant>That single patch annoys the SHIITE out of me. It makes machines based on Red Hat's patch set seem sluggish. Well, not really sluggish, but not as quick-feeling as I'd like. It's like the old Rochester Quadrajet: it always works when clean and properly maintained, except when the springs controlling vacuum response to fuel flow get weak... then there is a bit of stutter when you go to WOT, and sometimes it backfires once or twice *THEN* takes off like all hell's fury. The one thing this patch changes is the response to HEAVY load. It doesn't start wailing on itself like the "new stock" VM does, flailing memory in and out of disk swap and scouring memory for free pages to scrub, basically behaving like a machine that is too small for the job intended. But if you have a machine that is more than you need, this VM takes performance away from you. Giving up about 3%-15% (depending on the arch, BTW) is not my idea of a good thing.</rant>

There are other distros that heavily modify the source; they like to see familiar names in the messages scrolling by. By familiar, I mean theirs.
--
[link|mailto:greg@gregfolkert.net|greg],
[link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey

[link|http://it.slashdot.org/comments.pl?sid=134485&cid=11233230|"Microsoft Security" is an even better oxymoron than "Military Intelligence"]
No matter how much Microsoft supporters whine about how Linux and other operating systems have just as many bugs as their operating systems do, the bottom line is that the serious, gut-wrenching problems happen on Windows, not on Linux, not on Mac OS. -- [link|http://www.eweek.com/article2/0,1759,1622086,00.asp|source]
My uninformed take on the matter
On 2.4 releases, the official kernel was a bare minimum compared to the distributions. This meant it did not come with a bunch of file systems and certain device drivers that the distributions wanted in.

If I wanted XFS (SGI's file system), I either needed to patch it in myself or go with a distribution that shipped it, such as SuSE.

It also might have had certain memory or dispatch code that the distribution vendor thought was wrong for their target audience, such as RH wanting certain large-memory patches that did not come standard.

People who got their kernel from a distribution got the stuff they already wanted/needed, or were willing to do without (such as no XFS in RH).

The vanilla kernel was almost NEVER used. But it was a known baseline for people to apply their patches to.

And people who wanted a particular feature that was not in the vanilla kernel and not in a distribution were hopefully savvy enough to pull the vanilla kernel and patch it. If they were NOT savvy enough, then they had no business deciding they needed something that was not already in it.
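Mechanically, "pull the vanilla kernel and patch it" just means applying a unified diff on top of the pristine source. A toy sketch of that workflow, using scratch files in a temp directory instead of a real kernel tree (all filenames and the CONFIG_XFS line are made up for illustration):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

# Stand-ins for a pristine tree and a vendor's modified tree.
mkdir -p vanilla patched
printf 'CONFIG_XFS=n\n' > vanilla/config
printf 'CONFIG_XFS=y\n' > patched/config

# A vendor or feature patch is just a unified diff against the vanilla
# files ("diff -u" exits 1 when the files differ, hence the || true).
diff -u vanilla/config patched/config > xfs.patch || true

# ...which anyone can apply to their own copy of the vanilla source:
cp vanilla/config mytree-config
patch mytree-config < xfs.patch

grep CONFIG_XFS mytree-config   # prints: CONFIG_XFS=y
```

Real kernel patches are the same idea at larger scale, typically applied from the top of the source tree with `patch -p1 < some.patch`.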


As of 2.6, the official kernel really isn't something that is "locked down". It has a LOT more stuff standard, such as multiple file systems, process classes for prioritization, and SELinux for security (the list goes on and on).

While in the past an even number (2.2, 2.4) meant a solid, unchanging release, it does NOT in 2.6. About 6 months ago at the kernel summit, Linus Torvalds stated that he liked adding new things the way they currently were, and that it was up to the distributors to determine what they wanted in their kernel and lock it down for their given customer base.

So the "vanilla" 2.6 kernel is in a much higher state of flux compared to previous even kernels. I would never run a business off the vanilla kernel.
Your misconception of the development model
For the Linux kernel is badly skewed. I am not trying to troll here. Please just read.

I am not arguing the point about there being no 2.odd kernel series. I am arguing that you do not seem to realize the model never *DID* change. It has always been that way.

The development stuff from the 2.1/3/5 (or what have you) series was being backported into the "stable" kernel, which was causing a tremendous amount of work that really didn't need to be there.

The distributions were already "stabilizing" the 2.0/2/4 kernels long before this announcement. Linus was not willing to branch again only to have to do the work under both trees. Instead, he stated that things hadn't changed; what had already been happening for years was simply clarified. On this very subject, I had a public discussion with Greg Kroah-Hartman on the Linux Elitists mailing list when the announcement happened, as he was the first to test it officially. I said "How dare he test it out so quickly!" or some such.

The announcement changed ZIP, zilch, nada for the distros. They are still stabilizing the Linux kernel exactly the same way they were before, except that backporting things from .7 to .6 is no longer needed. The development is all happening in the developer trees, where it should have been in the first place; Linus then pulls their trees and does manual merging and rejection in the BitKeeper archive. These changes are all available from the development trees as well. All of the kernel developers are doing tremendous amounts of testing, the same way the "supposed" old development model worked, except that they don't have to track two sets of code. Given that the APIs have not changed drastically, this lends support to the current dev model. Now, Linus *DID* say that if changes were being applied that made huge swaths of the kernel unusable, he would branch to 2.7 until it was working well enough to merge back into 2.6, and then stop the branch. He also said there might be multiple starts and re-merges of the 2.7 branch.

I read nearly all of the discussions on the LKML; I am subscribed, though pretty much read-only. It is interesting to watch/read.

The 2.6.10/11 kernel is far, far ahead of where the 2.4.10 kernel was, comparing 2.6.10 with 2.6.0 and 2.4.10 with 2.4.0. To tell the truth, there are far fewer problems in 2.6.10 than there were in 2.4.10.

I pick on 2.4.10 mainly because it was a serious change that should tell you exactly what I am telling you (2.4.11 was BORKED so badly it was renamed 2.4.11-DO_NOT_USE.tar.gz or some such). This "magically changed" dev model has been going on for years.

Tell me why you believe the 2.6 kernel is not stable for enterprise/production use?
--
[link|mailto:greg@gregfolkert.net|greg],
[link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey

Edited by folkert Feb. 28, 2005, 12:29:31 PM EST
I said the "vanilla" 2.6 was unsuitable
Not the ones from the distributions.

There are many features that I consider enterprise-ready congealing in the 2.6 kernel. In many cases, they were available externally in the distributions for quite some time, just not in RH, which is where I'm forced to be.

But of course, we will not "move" to AS 4; we'll be stuck with AS 3 for quite some time. There is no direct upgrade path, and I would not take it even if there were. I assume the next Linux server we install will try AS 4, but that will really depend on the 3rd-party applications being certified on it.

I have no argument with your description.

Like I titled my post, "My uninformed take on the matter".

Mkay... now you are informed.
--
[link|mailto:greg@gregfolkert.net|greg],
[link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey

Re: kernels that come out of distributions
I am trying to understand the process. Linus et al maintain the "official" kernel, deciding what patches get in, etc. Then a distribution takes the official kernel and adds its own unofficial patches... because... why?

There are a couple of common reasons for somebody making a distribution to modify the kernel. The biggest are compatibility, security, and adding features. Adding features that have not yet been included in the base kernel is especially common among the more specialized distributions. Distribution vendors are also usually faster to include security patches than the mainline kernel, particularly if the problem is theoretical or the solution is ugly.

But compatibility is probably the single largest reason. Distributions often include patches to make the kernel work with binary drivers, to fix problems with certain software, or to make it work with their distribution-specific systems.

Shouldn't the official kernel be sufficient? And if each distribution modifies, sometimes markedly so, the official kernel, then what is the official kernel used for? Is it simply a standard, or is it actually used for something? I know these are very uninformed questions, thanks for looking and thanks for any light you can shed on this for me.

The only reasons most people might want to use the official kernel are that they want to customize the kernel or to be involved in testing the current development kernel.

Jay
     kernels that come out of distributions - (scorsese) - (8)
         Why not? - (imric)
         Distro kernels are heavily patched... - (pwhysall)
         Some distributions... - (folkert)
         My uninformed take on the matter - (broomberg) - (3)
             Your misconception of the developement model - (folkert) - (2)
                 I said the "vanilla" 2.6 was unsuitable - (broomberg) - (1)
                     Mkay... now you are informed. -NT - (folkert)
         Re: kernels that come out of distributions - (JayMehaffey)
