Post #70,469
12/23/02 1:09:54 AM
|
Re: Well, no, it's not a Linux thing at all
Thanks for the many insights - I find them interesting because at one time I was going to be the Country Product Manager for Workplace OS, which was to run on the PowerPC CHRP architecture.
We had prototypes & even did demos at trade shows; then the CHRP plug got pulled by Lou Gerstner.
I also worked with a whole variety of Unix species that IBM had at different times. One included Mach 3. Also, most people won't know that the first highly scalable Unix was ported by Bell Labs & IBM folk (at Piscataway NJ - sp) to a System/360 MP-configuration machine. Some IBM folk did a Virtual Machine component that this Unix ver7 variant ran on. IBM mainframers killed the project for fear it would be successful & cannibalise OS/360 sales. DEC then garnered that business.
One other Unix variant we had on mainframes came from another US university & was called AIX/370, which in reality was just a name - it supported a distributed filesystem & could handle multiple server architectures, but at IBM we were only interested in the System/370 version. It supported so-called AIX on both PCs and RT/PC computers. I demoed that software on mainframes at UNIX UserGroup tradeshows on the exhibition floor in 1989 in SFO, & 1990 in Sydney & Wellington NZ. It was demoed in NYC in 1990 but I didn't make it to that show.
Mach was of great interest to me because it underpinned Workplace OS, which was going to be a multi-personality OS (Win, Mac & Nix).
Cheers
Doug
PS my wife doesn't care a hoot about all this lovely stuff as she just wants simplicity & consistency & Mac OS X delivers to her great satisfaction.
|
Post #70,473
12/23/02 2:28:17 AM
|
Workplace OS rocked
Hi, Doug.
I remember Workplace OS well, and am glad to encounter, once again, people who were involved with it. I used to talk about it in my editor's columns, when I ran the 40-page monthly newsletter of the San Francisco PC Users Group. Damn shame that it never was released.
NeXT probably encountered exactly the same engineering problems that the Workplace OS team did -- locking and scheduling bottlenecks. Reportedly, that's why they fused the BSD and Mach layers.
Yes, IBM to this day remains really good at virtual-Unix implementations. One of my former firms worked with them to get massively parallel virtualised Linux sessions running on System/370 machines. They're able to run some absurd number of virtual sessions -- like several thousand.
My wife and mother-in-law both like MacOS X for the same reason your wife does (and my wife is the real coder of the family). It's a pity that almost all of its users have basically zero comprehension of it as a Unix platform: I'm a little underwhelmed by people thinking it's great running a Unix OS so they can do MS-Word, MSIE, and MS-Outlook. To their credit, Apple Computer at least do an excellent job of making it difficult for their predominantly technophobe userbase to hurt themselves.
A brief story:
During OS X's beta cycle and for most of a year afterwards, I participated on one of the main non-Apple-run OS X mailing lists. (We ran the beta on a G4 cube, and I admined it, mostly via ssh.) Many of the people there were relatively new to Unix platforms, but technically clueful (being early adopters), so we had a good time figuring out technical problems and helping one another.
Then came the week of OS X's public release, and the list was flooded with traditional MacOS users. The problem wasn't the inevitable decline in technical content, but rather the severe intolerance that arrived with them: Suddenly, I got anonymous hatemail every few days, demanding that I unsubscribe because Unix users have no business on a Mac mailing list. Also, one list-member kept trying to pick fights with me about alleged superiority of Apple operating systems and software for MacOS over anything Linux (his topic, not mine), which he justified (solely) by pointing to my domain name. Nobody else on the mailing list seemed to consider this unusual or unacceptable behaviour -- and several informed me that my presenting command-line solutions to questions was unwelcome, because it was "non-Mac" and because they felt somehow threatened by such.
I mentioned the problem of MacOS users knowing things that aren't so: I kept pointing out that it was in the interest of OS X users to install their systems mostly on "UFS" (which is actually FFS), rather than Apple's rather slow and fragile legacy HFS+ filesystem -- and I was able to describe in detail why. But the longtime MacOS people would nonetheless pronounce that this was completely wrong. Why? Because Apple Computer's MacOS X hard-drive preloads used 100% HFS+ filesystems. For them, that was absolutely the end of the discussion: UFS was just not Cupertino-blessed; therefore, it was bad.
There is an operational problem with some software when it is installed on UFS, but it's a problem with the software, rather than UFS: MacOS application coders have tended to depend on sloppy filesystem semantics characteristic of prior MacOS versions, in which filenames are case-preserving but not case-sensitive. (Such an app might create filename "Preferences" but read back "preferences": On HFS+, the original file is still returned. On UFS, those would be different files.) For example, Lexmark's MacOS X printer drivers have that HFS+ dependency.
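To make those semantics concrete, here's a minimal terminal sketch, assuming one volume of each type mounted at hypothetical paths under /Volumes:

    $ cd /Volumes/UFS-volume            # case-sensitive (FFS)
    $ touch Preferences
    $ cat preferences
    cat: preferences: No such file or directory
    $ cd /Volumes/HFS-volume            # case-preserving but case-insensitive
    $ touch Preferences
    $ cat preferences && echo resolved  # the mismatched name still finds the file
    resolved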
The logical response would be to install those slightly-defective apps on one's small HFS+ "legacy" partition, while keeping the bulk of one's files on UFS. And file a bug with the slobs who wrote that code.
But the orthodox Church of Steve answer was "UFS is bad. It must be too Unixey, which is why it creates problems. If it were any good, Apple would preload onto it."
You remember all the people who insisted on installing OS/2 on FAT, because HPFS was obviously too complex? Same technopeasant faith, different denomination. (They never checked to see if their answers actually worked or made sense, either: The sole test of merit was whether an Apple-blessed authority said it or not.)
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,506
12/23/02 10:57:59 AM
|
good criticism, but only half right (re. filesystems)
It doesn't matter where the problem lies -- if UFS is a superior filesystem but the sloppy programs you rely on don't like it, then you'll gravitate towards the filesystem that the sloppy programs you rely on will use. It doesn't necessarily matter if the new tool is a thousand times better: if you still need to use your old software, especially if it's software you've made a significant investment in, you'll resist the change, because why go through all the headache?
This doesn't mean I disagree with your comments regarding the dangers of the "technopeasant faith" (though I do find that term more than a little arrogant) but there's no point in using the better technology if you can't run anything on it.
I never ran into the FAT v. HPFS problem in the OS/2 world, but that may have been a bit before my time. I loved HPFS. There was a similar furor over the latest filesystem, but there was a more significant problem there -- occasionally all your files would up and disappear due to some weird filesystem/kernel incompatibility (this was fixed in an update). At any rate, because of that a lot of people stayed away from it for quite a while.
"We are all born originals -- why is it so many of us die copies?" - Edward Young
|
Post #70,508
12/23/02 11:11:45 AM
|
On FAT vs HPFS
I agree with your sentiments. A newer FS may be better, but if it causes more pain than a user's willing to tolerate, it'll have slow adoption.
I used OS/2 on FAT for quite a while. I started with 2.0 in May 1992. I used FAT because I didn't want to spend the $ on Gammatech's HPFS tools while Norton was a known quantity (and handled .EAs properly). There were occasional HPFS horror stories on USENET - critical bugs that needed to be fixed quickly, etc. For most users, it was a great FS from the get-go. But it made me a little nervous.
After eventually developing trust in HPFS on a test 2.1 partition, I only used FAT for a common partition with DOS. Now I wouldn't use anything other than HPFS for OS/2 - it's a great filesystem.
Similarly, when I first used Win95 I used FAT16 partitions until I developed confidence in FAT32.
I'd act the same way in moving to Linux - I'd start with ext2 before using JFS or ext3 or ReiserFS. While the latter FS are no doubt better, I feel it's better to start with an older FS and develop confidence before moving on.
YMMV.
Cheers, Scott.
|
Post #70,529
12/23/02 2:45:45 PM
|
Re: On FAT vs HPFS
Another Scott wrote:
After eventually developing trust in HPFS on a test 2.1 partition, I only used FAT for a common partition with DOS.
Makes sense.
Now I wouldn't use anything other than HPFS for OS/2 - it's a great filesystem.
It really is. Did you know: For a very long time, technical users of MS-Windows NT continued to embarrass Microsoft Corporation by finding inventive ways to keep NT's HPFS support working on NT workstations. Microsoft created that support in order to try to migrate over OS/2 users, but was continually embarrassed by users preferring that filesystem on new workstations, as it was/is self-defragmenting (unlike FAT and NTFS) and provides many times faster file access. Microsoft has made it progressively more difficult, of late requiring you to hand-edit Registry keys and copy over system libraries from old NT versions. I don't know if it's still possible, not having any Win32 systems to play with.
I'd act the same way in moving to Linux - I'd start with ext2 before using JFS or ext3 or ReiserFS.
Yes. During the California power shortages of a year ago, many people suddenly developed a liking for journaling filesystems. I rebuilt my server on SGI's XFS filesystem at the time, even though my distribution lacked support for it. (I [link|http://linuxmafia.com/~rick/linux-info/xfs-conversion|wrote up the process], to help others.)
Of late, after initially being very skeptical about ext3's overhead, I've found it to be so low (in all circumstances I've tested) as to be negligible, and therefore find ext3 really useful -- because it's so very easy to convert any ext2 filesystem to it. With the nice advantage that ext2 remains available as a fallback: Just remount as ext2, and you're done. Journal corruption is a non-issue. Just delete and remake it: No risk to data.
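For anyone wanting to try the conversion, here's a minimal sketch, assuming the filesystem lives on /dev/hda5 (a hypothetical device; substitute your own) with a reasonably recent e2fsprogs:

    tune2fs -j /dev/hda5            # add a journal to the existing ext2 fs; data untouched
    # edit /etc/fstab: change that entry's filesystem type from ext2 to ext3
    umount /home && mount /home     # remount so the ext3 driver takes over
    # fallback at any time: the same filesystem still mounts as plain ext2
    mount -t ext2 /dev/hda5 /home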
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,762
12/25/02 6:55:44 AM
|
Re: On FAT vs HPFS
Yes - pinball.sys - was a reghack on 3.51 - and it still worked on NT 4.0 - although "this is not supported" yada yada. I had a machine that quintuple booted various Windows and OS/2 versions - HPFS always impressed me with its speed.
One of our old and missed members, Brad Barclay (aka Yazstromo, OS/2 god), wrote a nice summary of HPFS here some years ago.
-drl
|
Post #70,859
12/25/02 11:20:32 PM
|
Re: On FAT vs HPFS
FYI.. Yaz is a pretty regular poster to the mail list, if you're wanting to get in touch :)
----- Steve
|
Post #70,922
12/26/02 12:02:03 PM
|
Re: On FAT vs HPFS
I wish I could buy a copy of Warp 4 -grumble-
Say hi to Yaz and bid him drop by here :)
-drl
|
Post #70,926
12/26/02 1:01:04 PM
|
3 copies now on eBay. IBM has it too for $180.
|
Post #72,155
1/3/03 1:54:53 AM
|
consider picking up eCS
it is more up to date than Warp 4. [link|http://www.ecomstation.com/template.phtml?url=automated/news/eCS%201.1%20Release%20Candadite%20[lb]1%20being%20uploaded%20for%20UP%20subscribers.html&title=eCS%201.1%20Release%20Candadite%20#1%20being%20uploaded%20for%20UP%20subscribers|Release Candidate 1] of version 1.1 was recently put online for "upgrade protection customers".
Darrell Spice, Jr.
[link|http://home.houston.rr.com/spiceware/|SpiceWare] - We don't do Windows, it's too much of a chore
|
Post #70,526
12/23/02 2:32:37 PM
|
Er, I did say two volumes...
cwbrenn wrote:
if UFS is a superior filesystem but the sloppy programs you rely on don't like, then you'll gravitate towards the filesystem that the sloppy programs you rely on will use.
When you're preparing the hard drive for installation, you create a small HFS+ volume and a large UFS one. Subsequently, in the rare event of your installing something to the UFS one and it not working, you install it over again to HFS+ and file a bug report.
I believe I've now said that twice. The reasons should be apparent.
I loved HPFS.
If you dual-booted, then you had both HPFS and FAT. If you didn't, it was silly to keep much of your hard drive as FAT.
This doesn't mean I disagree with your comments regarding the dangers of the "technopeasant faith" (though I do find that term more than a little arrogant).
Remember: A gentleman tries to never give offence accidentally. ;->
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,520
12/23/02 12:52:09 PM
|
Partial agreement here
There are a lot of clueless, non-technical Mac users. After all, this was the market Apple sold into: the folks who didn't like to fiddle with their computers.
With the advent of OS X, these folks have become very obnoxious and seem to fall into two camps:
- The ones who hate OS X, because it's 'not Mac-like' and 'UNIX sucks' and so on and on...
- The ones who like OS X, but think since it's Apple and UNIX-compatible it must be the 'One True UNIX'.
Fortunately, OS X has drawn in new users who would have never even glanced in Apple's direction before. These also seem to fall into two camps:
- Ex-Windows users who are tired of fussing with their computers and just want stuff to work (and are the major target of the 'Switch' ads)
- Technical folks, like scientists and coders, who previously had to have two computers (or a single, dual-booting computer) running *NIX and Windows because their employers had standardized on tools like Office.
I thought I knew where I was going with this, but my wife interrupted me in mid-thought...
Anyway, there *are* a lot of assholes out there and zealotry (of any stripe) seems to be the trait they hold in common.
I appreciate your further explanation of the UFS vs. HFS+ issue, by the way. You explained it quite clearly and simply.
At some point, I hope OS X will have the same choice of filesystems that a Linux system does.
In the meantime, Apple is at least beginning to shed some of their legacy cruft. The 'proprietary printing architecture' you referred to in a previous response has been replaced by CUPS as of OS X 10.2.
Good discussion so far. Lots of light, very little heat.
Tom Sinclair
"Everybody is someone else's weirdo." - E. Dijkstra
|
Post #70,617
12/24/02 1:26:36 AM
|
Thanks for that
Tom wrote:
Good discussion so far. Lots of light, very little heat.
Thank you for reading it in the spirit intended. I admit to harbouring some lingering resentment at some of the treatment I received on the aforementioned OS X mailing list ("x4u"), what with the anonymous hatemail and gratuitous Church of Steve "witnessings" I kept being subjected to, on grounds of insufficient ideological purity and excessive technical aptitude. But of course that wasn't anybody here, doing that.
The worst aspect of that treatment was that it was willfully unclear on the basic concept of why I was (and wasn't) participating, there. I mentioned the guy who incessantly tried to troll me into MacOS vs. Linux advocacy debates (in which I had no interest). Since he kept trying nonetheless, I tried to explain to him that, because of their different licence model, Linux users had no stake in his choice of operating system: Their core interests were simply not subject to zero-sum popularity contests with other OS environments. I suggested that, if it really bothered him that much that I was posting (useful, correct) OS X technical answers from "rick@linuxmafia.com" using Linux console mailers, he should seek some more private form of therapy, as the barrage of non-sequitur OS advocacy was both clueless and annoying.
And, of course, he didn't listen. I quietly killfiled him, which removed the problem from my view, at least.
That gentleman was hardly alone in attributing imaginary, fanatical nag-people-style OS advocacy to Linux users who've merely answered technical questions and sport X-Mailer headers and .signature blocks to match. Sam Varghese, an excellent Australian IT industry analyst and reporter, recently interviewed me for an upcoming series of articles in The Age and the Sydney Morning Herald. Here's part of my interview:
You've been described as a rabble rouser. True? False?
The rumours are true, but I raise only top-quality rabble.
We have a saying in the Linux community: "If you don't like the news, make some of your own." Here in the San Francisco / Silicon Valley area, a number of us found to our surprise that we're pretty good at Linux publicity events, and have done a number of them. We had a huge summer picnic in celebration of Linux's 10th birthday in 2001, and had such a good time that we repeated it this past summer, too. In 1998, we had several PR events where we good-naturedly capitalised on Microsoft marketing efforts to show up in public and on-camera, such as during the product launch of Windows 98, where we gave out hundreds of Linux CD-ROMs to people interested in installing them (and pointed out where the stores were also selling Linux boxed sets).
One of the surprises of those years was that we seemed to be significantly more effective at marketing than Microsoft Corporation was, and with no funding at all.
How many people have you converted to Linux? Take the case of any one individual you've converted to Linux. Let's have a rundown of the process.
This is my golden opportunity to embarrass my friend Bill Schoolcraft, so I'm going to run with it. Bill was a professional industrial welder with no particular computer expertise, when he noticed Linux gatherings and started attending them to see what it was about. I was one of the old-timers he learned from, and I successfully badgered him to take extensive notes. I think it was when I kept using the metaphor of software as tools, and stressing the difference good tools and mastery of them can make, that he really "got" the point of the Unix way of thinking. Now, six years later, he's a senior Linux and (Sun Microsystems) Solaris administrator, and earns a good living at it.
But I don't seek to "convert" people in the sense of trying to interest those who prefer something else. Why would I? More about that, below.
Do you think you could achieve more if your advocacy was a little less strident?
I'm reminded of a story about the 19th century US public speaker and political figure Robert G. Ingersoll, who was wildly popular with the public but inspired influential "establishment" detractors by being openly non-religious: Some reporters came to visit, and asked him about the rumours that his son had gotten drunk during a wild party and fallen unconscious under the table. Ingersoll paused for effect, then started: "Well, first of all, he didn't fall under the table. And he wasn't actually unconscious. For that matter, he didn't fall. And there wasn't any party, and he didn't have anything to drink.... And, by the way, I don't have a son."
So: It's not what I'd call strident, and I don't do advocacy. At least, not in the usual sense of the term.
The usual sort of OS advocacy is what the "Team OS/2" crowd used to do: They knew that their favourite software would live or die by the level of corporate acceptance and release/maintenance of proprietary shrink-wrapped OS/2 applications. They lobbied, they lost, IBM lost interest, and now their favourite OS is effectively dead.
But Linux is fundamentally different because it and all key applications are open source: The programmer community that maintains it is self-supporting, and would keep it advancing and healthy regardless of whether the business world and general public uses it with wild abandon, only a little, or not at all. Because of its open-source licence terms, its raw source code is permanently available. Linux cannot be "withdrawn from the market" at the whim of some company -- as is slowly happening to OS/2.
Therefore, Linux users are not in a zero-sum competition for popularity with proponents of other operating systems (unlike, say, OS/2, MS-Windows, and Mac OS users). I can honestly wish Apple Computer well with their eye-pleasing and well-made (if a bit slow and inflexible) Mac OS X operating system: Wishing them well doesn't mean wishing Linux ill.
Note that all of the identifiable "Linux companies" could blow away in the breeze like just so much Enron stock, and the advance of Linux would not be materially impaired, because what matters is source code and the licensing thereof, which has rather little to do with any of those firms' fortunes.
Further, and getting back to your original point, I honestly don't care if you or anyone else gets "converted" to Linux. I don't have to. I'm no better off if you do; I'm no worse off if you don't.
What I do care about is making useful information and help available to people using Linux or interested in it. Why? Partly to redeem the trust shown by others when they helped me. Partly because it's interesting. Partly because researching and then teaching things I usually start out knowing little about is the best way I know to learn. And partly out of pure, unadulterated self-interest: People knowing your name is at least a foot in the door, in the IT business.
As to stridency, there is a well-known problem of all on-line discussion media: Some people become emotionally invested in positions they've taken in technical arguments, and gratuitously turn technical disagreements into verbal brawls. And unfortunately they tend to be drawn to people like me who attempt to state their views clearly and forcefully. It's as if you were to say "I like herring" and thereby summon every dedicated herring-hater within a hundred-mile radius. The problem comes with the territory.
But that's a cause of occasional unpleasantness and back-biting among some on-line Linux users, not an aspect of "advocacy" -- which isn't something we have much use for generally, especially where the term refers to convincing the unwilling.
What do you hope to achieve by this advocacy?
I hope to have fun, to learn, to help those willing to "help themselves" by learning about their systems, to become qualified to work professionally with better and more-interesting technology, to spend more of my time around people I enjoy, and to improve my quality of life by improving the grade of tools I work with.
Please note that "converting users to Linux" is nowhere on that list.
If you lived here, you'd be $HOME already.
|
Post #70,630
12/24/02 7:59:39 AM
|
Very True (TANGENT)
The license model Linux uses makes it almost bulletproof against the kind of lousy stuff that happened to OS/2. That said, it's the "almost" that I believe creates the worst part of the Linux community.
Linux will live or die by two things:
1. The sanctity of its licenses, and
2. The presence of effective developers within the community
The second point sort of hinges on the first. As long as the GNU and other licenses in a Linux distro remain enforceable, you can't have a company like IBM or Microsoft bury the software. Even if no commercial entity on earth wanted to touch it, so long as the source remains accessible it can continue to be developed... which is NOT the case with OS/2, even though some people have managed to do some amazing things for it despite that fact.
This creates most of the rabid paranoia surrounding all of the "what license do you use" debates.
The second point is a bit more psychological, though. The more people new to Linux who show up on the scene, the larger and louder the nutjob community is going to get... because Linux is a community-developed effort, and new users will initially have nothing to contribute. And those of us who have no programming skills will have nothing substantial at all to contribute to Linux in any way, shape or form. Thus, newbies will be seen as "freeloaders" by parts of the Linux community, and the ranks of the nutjob elite will swell.
Why? Because while Linux is almost bulletproof, those two points (the sanctity of the license and the presence of developers in the community) need to be protected, and protected viciously. If a law like the DMCA suddenly makes the GPL illegal, Linux is screwed. If there are so many unskilled newbies (like me) using Linux that it's impossible to FIND the developers (i.e., terrible signal to noise ratio) then Linux development will founder.
Alas, the necessity of both makes the Linux world seem almost alien to a lot of people...
"We are all born originals -- why is it so many of us die copies?" - Edward Young
|
Post #70,694
12/24/02 1:53:08 PM
|
Proprietary forks and threat models
cwbrenn wrote:
As long as the GNU and other licenses in a Linux distro remain enforceable, you can't have a company like IBM or Microsoft bury the software.
I'm not sure you've thought this through thoroughly: As we say in the security field, have you considered the threat model? What do you feel that threat model is? You didn't specify.
It sounds like you're saying the obligation to release source changes (when people distribute modified binaries) might prove legally unenforceable. If so, then the only consequence would be that people could lawfully distribute proprietary forks.
Wow, that's what killed FreeBSD, NetBSD, and OpenBSD, right?
Oops.
If a law like the DMCA suddenly makes the GPL illegal....
OK, the Anti-Commie-Pinko Statute of 2003 comes into force, providing that mandatory source-code disclosure provisions are unenforceable and that those who attempt to use them are to be publicly impaled. People who have been producing GPLed codebases fork their own codebases, issuing new instances under two-clause BSD licences. Curse their devilish cleverness!
The Anti-Commie-Pinko Statute of 2004 follows that, stating that all software licences that don't require payment of money are unenforceable. The aforementioned malefactors continue to distribute their software anyway, and just don't seek to enforce their terms.
The Anti-Commie-Pinko Statute of 2005 comes last, and bans copyright law's applicability to software that is distributed without an obligation to pay money for it. The aforementioned malefactors declare their works to be in the public domain.
We could go on, I suppose.
As I was saying to Todd, the essential characteristic of open-source software isn't any specific licence, but rather the right to fork. You're going to have a difficult time concocting a credible dystopia where that's barred by law.
...Thus, newbies will be seen as "freeloaders" by parts of the Linux community, and ranks of the nutjob elite will swell.
Again, and the consequence is...?
The Linux coder community has been self-sustaining for a decade. And they're really, really good at filtering out noise. Increase the irrelevant noise by a factor of ten, and they'll still filter it out.
Experiment: Join LKML (the Linux kernel mailing list), and start deliberately posting random crap about how Linux distributions aren't friendly enough and that developers need to add binary handlers for VBA, and things like that. Attempt to do that for a week.
It'll come as no surprise that you'd be pretty much universally killfiled, right? However, it might surprise you that you'd be silently removed from the subscriber roster in fairly short order, and find that pretty much all subsequent subscription attempts would mysteriously fail.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,899
12/26/02 7:01:53 AM
|
Dystopian Vision
Uh... you'll recall I was talking about the reasons why there are nutjobs in the Linux community, right? My entire "if the DMCA makes the GPL illegal then Linux is doomed" scenario is taken from a "nutjob raving" I've seen posted fairly regularly. Since when did logic ever enter into the equation when dealing with the lunatic fringe, especially where operating systems are concerned?
As to my second point, well, you're right -- the chance of the unwashed masses ever drowning out a developers' mailing list is very, very small. However, when it reaches the point where developers have to sequester themselves into tiny rooms to hide from the keening of the great clueless throngs of their userbase, you can go ahead and torch that bazaar and build another cathedral, because you've effectively lost the open development model.
"We are all born originals -- why is it so many of us die copies?" - Edward Young
|
Post #70,957
12/26/02 4:34:53 PM
|
Re: Dystopian Vision
cwbrenn wrote:
However, when it reaches the point where developers have to sequester themselves into tiny rooms to hide from the keening of the great clueless throngs of their userbase, you can go ahead and torch that bazaar and build another cathedral, because you've effectively lost the open development model.
The hypothetical keening masses would be seeking handholding technical support, which the developers would not be offering. Fortunately for both parties, (1) the former don't even know where to find the latter, and (2) other people entirely tend to see this situation as a business opportunity, and advertise exactly those services.
I've often really liked the "a la carte" model for support/training/administration, especially when I was operating a consulting business. ;->
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,724
12/24/02 7:15:12 PM
|
Accident waiting to happen
Look, I'll start out by pointing out that I hate HFS+. It's a lame file system.
But it supports resource forks. UFS doesn't. Set up two volumes, one UFS and one HFS+, and you're quite likely to screw up and put a file with a resource fork on the UFS drive, where it will get broken.
I run all HFS+ drives. Not my pref - I wish they would move the whole mess to UFS - but the fact is that I (and most long-time Mac users) have a bunch of old files that have interesting data in the resource forks (like old Quicken files).
So from a safety standpoint, it's a bad idea to have a UFS drive on your machine for general data storage.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,742
12/24/02 11:03:53 PM
|
You found a way to break your files? Sorry to hear.
ToddBlanchard wrote:
But [HFS+] supports resource forks. UFS doesn't.
It does here, and everywhere else I've used it. Sorry to hear about your problem.
I think NeXT, Inc.'s UFS "bundles" solution is brilliant, in fact, and adapted very cleanly to handling resource forks (as resource files whose names add a "._" prefix to the regular file's name, rather than as forks). It's proven very reliable. I'm curious about which method you found to damage such files. Possibly, you tried to move base data files without the matching dotfiles?
I'm really not sure how you're managing to "get broken" your file resources in just "putting them" on the UFS filesystem, but personally think your solution of avoiding the superior filesystem option entirely is about as pitiful as not reading e-mail from strangers because of the alleged threat from file attachments. Whatever works for you<tm>, though.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,757
12/25/02 5:29:55 AM
|
You're wrong again as usual - stick to Linux
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,764
12/25/02 7:05:12 AM
|
Verily, the technopeasant priesthood has spoken
ToddBlanchard wrote:
You're wrong again as usual - stick to Linux.
And here we have the Cupertino religion in a nutshell, folks.
But, for the benefit of those who actually aspire to understand technology, rather than just parrot views popular in some tight little community, here's how to carelessly screw up your resource forks on UFS:
Simple: just forget what you know about how they're stored, and then shoot yourself in the foot accordingly. E.g., move files using cp, mv, rsync, tar, cpio, etc., without bothering to grab and move the ._filenames that go with them (in the minority of cases where files on MacOS X still use resource forks).
On the other hand, if you handle such files in a clueful fashion, reflecting understanding of how they're stored, you won't shoot those holes in your foot. E.g., tar up the directory containing the files to be copied, rather than just grabbing the file (data fork) and stupidly leaving the related dotfile (resource fork) behind.
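To make that concrete, a minimal sketch, assuming a hypothetical Report.doc whose resources live in ._Report.doc on a UFS volume:

    $ ls -a Documents
    .   ..   ._Report.doc   Report.doc
    $ cp Documents/Report.doc /target/       # naive: strands ._Report.doc
    $ tar cf /tmp/docs.tar Documents         # archiving the directory carries
                                             # both halves of every file along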
But if the concept of getting the entire file and not leaving half of it behind sounds too difficult to deal with, go back to your MS-Outlook on HFS+ and switch your brain off. It's a whole lot easier than thinking.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,772
12/25/02 9:10:39 AM
|
You don't actually *use* the system, do you.
You must have awful self-esteem to put so much effort into trying to make other people feel small. Should I conclude you to be a typical linux advocate, I'd want nothing to do with the thing.
Anyhow, you're still wrong. I mean, you did go on and on about not using "proprietary" systems. So why you feel qualified to pontificate on a system you don't use daily is beyond me.
I don't use linux much. Not often enough to feel qualified to give advice on best practices for setting it up. So if asked about best practices on linux - I'd likely say "I don't know".
I'm guessing you've never uttered those words in your life.
Anyhow, there are a lot of holes in your knowledge here.
One, tar (the one that comes with OS X) doesn't grab resource forks - you need to use hfstar - available from [link|http://www.metaobject.com|http://www.metaobject.com]
Two, I didn't see the bit about preserving the type and creator codes - wanna cover that again? Because a file without extension, type, and creator is pretty much useless - no application will open it unless you assign one from the Finder, and it's quite possible you don't know either.
Three, it is in copying files from HFS+ volumes to UFS volumes using the supplied command-line tools that they get broken (the Finder, I think, does the conversions, but who uses the Finder?).
Sure, you can - with care - use both kinds of volumes. But accidents can happen. In my experience, if they can, they do. Best not to take chances.
If you really want to understand how this works - here's a good paper. [link|http://www.mit.edu/people/wsanchez/papers/USENIX_2000/|http://www.mit.edu/p...pers/USENIX_2000/]
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,815
12/25/02 3:34:31 PM
|
Unix file basics
Sure, you can - with care - use both kinds of volumes
Care is also known as "technical competence". Quite a few MacOS people tend not to esteem it very highly, I've noticed.
No, I don't use Mac OS X "daily", because our Mac OS X boxes are primarily my wife's systems, not mine. I used them more frequently during earlier beta cycles, when they were more interesting.
tar (the one that comes with OS X) doesn't grab resource forks
Allow me to introduce you to Unix, where everything is a file. Resources on UFS are thus stored as dotfiles, adjuncts to the regular file (the former "data fork"), rather than as HFS-type forks. Therefore, if you wish to use tools like tar, you need to do so in such a way as to ensure that you get all of the file, and not leave part on the floor.
If you can't figure out how to do that correctly, my condolences, and you might be better off sticking with your MacOS ghetto stuff's lack of sharp edges.
Should I conclude you to be a typical linux advocate
I'm not a "Linux advocate" in the sense of attempting to convince anyone else to use it, who doesn't wish to. You could have read that fact earlier. Should I have used fewer syllables, or perhaps pictograms?
So if asked about best practices on linux...
I would say "Hire someone who actually wants you as a customer, and the best of luck to you, elsewhere."
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,828
12/25/02 5:48:00 PM
|
HFS File basics
"Care is also known as "technical competence". Quite a few MacOS people tend not to esteem it very highly, I've noticed."
Apparently neither do so-called unix people who think they understand all *nixes because they understand one. You're not competent on this OS and you're giving out bad advice - which is really irresponsible, because while I can tell it's crap advice, others less competent might make the mistake of taking your fantasy posts as facts and suffer data loss.
"Resources on UFS are thus stored as dotfiles as adjuncts to the regular file (the former "data fork")"
Yes, this is what the Finder does when it's asked to move an HFS file to a UFS volume. It's in the paper I cited above. But the command-line tools don't do this.
Allow me to introduce you to HFS+ where, if you have had a Mac for any period of time, most of your files are located.
Should you do the typical Unix user's thing - grab a terminal window and begin rearranging things with your favorite old tools - you stand a better than even chance of fucking up some of your important historical data (old MacInTax or Quicken files, for instance). Technical competence? Everything you think you know about the behavior of mv, cp, and tar is not quite right.
Let me also point out that should you desire to move them to UFS (which on OS X is actually not as cool a file system in a lot of respects - lack of journaling for one; the fact that the rest of the OS is actually tuned for HFS+, another), you'll find that a lot of interesting meta-information about your files will be lost forever - like type and creator codes. You'll find that the tar command that ships with MacOS X does not properly add resource forks to files, because on HFS+ they're not just files with name extensions.
So for the lurkers - this guy is giving out bad advice on a system he doesn't even use regularly. Ask someone else.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,875
12/26/02 1:00:34 AM
|
Envoi
You know, Todd or whoever/whatever you are, I simply have no interest in arguing with you. But in your spare time, you could try making a tarball of an entire directory (including dotfiles) on your (Mac OS 10-dot-whatever host's) UFS filesystem, and then untarring it into somewhere else on the same filesystem.
Gosh, what a surprise! All metadata are intact. I just ssh'd into my wife's iBook and [re-]tested that, and of course it (still) works fine. You'd almost think that understanding what your tools do, rather than indulging religious faith, is a reasonable strategy.
Whatever makes you happy, though; go crazy with it. But I believe we're now done. Time for you to go back to whatever it is that you do.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #70,890
12/26/02 3:04:16 AM
|
You could run the correct test
I don't use OS X, so I don't know what will or won't happen. But the fact is that you didn't test the scenario that Todd was complaining about.
Take files on an HFS filesystem. Tar them using standard unix tools. Untar them on a UFS filesystem. See if they are left intact.
Todd's claim is that they won't be, and the reason that Todd offers for why they won't be seems rather reasonable to me -- which suggests that you misunderstood what he was saying, and that he actually does understand how the filesystems and associated filesystem tools work.
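In concrete terms, the test would look something like this (hypothetical mount points); on Todd's account, the stock tar reads only the data fork, so the metadata never makes it across:

    $ cd /Volumes/HFS-volume
    $ tar cf /tmp/t.tar ForkedFile       # stock tar sees just the data fork
    $ cd /Volumes/UFS-volume
    $ tar xf /tmp/t.tar
    $ ls -a
    .   ..   ForkedFile                  # no ._ForkedFile: resources lost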
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #70,896
12/26/02 3:54:27 AM
|
This would be my cue to ask this: (new thread)
Created as new thread #70895 titled [link|/forums/render/content/show?contentid=70895|This would be my cue to ask this:]
"Ah. One of the difficult questions."
|
Post #70,897
12/26/02 5:45:35 AM
|
Exactly - thanks
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,925
12/26/02 12:24:36 PM
|
Am I understanding that unix & tar have different capabilities?
Depending on the file system - on HFS it cannot traverse directories and files in the same fashion as UFS? If I have an NTFS directory using "crappy file name conventions.doc", when I tar it up it is fine, but untarring it under unix would get several file-not-found messages? thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #70,962
12/26/02 4:40:42 PM
|
Not sure what you are asking
I don't know much about NTFS.
But I wouldn't try to tar up any classic style mac files and expect them to come out OK anywhere. Untarring on the mac is generally fine though.
You want to get a copy of hfstar for your OS X boxes. [link|http://www.metaobject.com|http://www.metaobject.com] has one (maybe look in community).
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,064
12/27/02 7:39:16 AM
|
No
I am saying that the behaviour Todd is talking about is that standard Unix tools know nothing about turning metadata which is part of HFS into extra files that will be interpreted on other filesystems as that metadata. That is, they turn one HFS file into one UFS file, which means that you lose all the associated metadata that the HFS filesystem stores directly and UFS does not.
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #71,073
12/27/02 10:32:17 AM
|
my point was Nix tools do not understand all file systems
so do not use them where not applicable. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,079
12/27/02 11:39:27 AM
|
You understand wrong
tar will create and extract archives in which filenames have embedded spaces. No problems.

Where you may run into problems is with processing that uses " " as an inter-file separator (man $(basename $SHELL) & search for IFS). Generally this situation is encountered in the form of a processing list, or with find / xargs process pipes:

    for file in $(ls)
    do
        echo "Now I'm going to do something with $file"
    done

...tends to turn "My Documents" into "My" and "Documents". (Note that a plain glob -- for file in * -- is safe; it's word-splitting of command output, as above, that breaks names apart.)

For find & xargs, the solution is to generate and expect null separators:

    find . -type f -print0 | xargs -0 foo

I'm not quite sure where Todd's going with his explanation, but if Mac OS X is storing metadata in some form of file and .file, both extant in the same directory, tar should pick these up just fine. My understanding is that the filesystem is being used to store information in "special" files, but these files don't have unusual characteristics beyond their significance to the user shell. The trick is to use, say, "." (PWD) rather than "*" (a glob of all files matching anything, so long as it doesn't start with a leading "."). But I freely admit a virtually complete lack of exposure to OS X.
--
Karsten M. Self [link|mailto:kmself@ix.netcom.com|kmself@ix.netcom.com]
[link|http://kmself.home.netcom.com/|http://kmself.home.netcom.com/]
What part of "gestalt" don't you understand?
[link|http://twiki.iwethey.org/twiki/bin/view/Main/|TWikIWETHEY] -- an experiment in collective intelligence. Stupidity. Whatever.

Keep software free. Oppose the CBDTPA. Kill S.2048 dead.
[link|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html]
|
Post #71,091
12/27/02 1:03:47 PM
|
well lets try it out
    [105-112:~] boxley% mkdir foo
    [105-112:~] boxley% cd foo
    [105-112:~/foo] boxley% cat /dev/null > foo at bar
    cat: at: No such file or directory
    cat: bar: No such file or directory
    [105-112:~/foo] boxley% ls
    foo
    [105-112:~/foo] boxley%
    [105-112:~/foo] boxley% cat /dev/null >"foo at bar"
    [105-112:~/foo] boxley% ls
    foo          foo at bar
    [105-112:~/foo] boxley% ls -l
    total 0
    -rw-r--r--  1 boxley  staff  0 Dec 27 13:00 foo
    -rw-r--r--  1 boxley  staff  0 Dec 27 13:02 foo at bar
When the nix OS tries to untar files with embedded spaces, it cannot write the file to disk, because the OS does not recognise embedded spaces unless they are quoted - which is a problem on untarring unless you specifically know what file you need. That is why using an SNMP-style notation in file names (fooAtBar) is a useful practice. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,095
12/27/02 1:37:05 PM
|
You know dam well...
Box,

You know damn well UNIX "sees" spaces as delimiters... you have to "backslash" them to take away the special meaning. You are really a Joker man... This issue has nothing to do with whether or not TAR can deal with embedded spaces... watch this...

    [gfolkert@paladin demo]$ cat /dev/null > foo bar
    [gfolkert@paladin demo]$ cat /dev/null > foo\ at\ bar
    [gfolkert@paladin demo]$ cat /dev/null > "foo not bar"
    [gfolkert@paladin demo]$ ls -l
    total 0
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo at bar
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo not bar

You see the equivalence... now for tar:

    [gfolkert@paladin demo]$ tar cvf demo.tar foo*
    foo
    foo at bar
    foo not bar
    [gfolkert@paladin demo]$ ls -l
    total 12
    -rw-r--r--    1 gfolkert users    10240 Dec 27 13:30 demo.tar
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo at bar
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo not bar
    [gfolkert@paladin demo]$ tar tvf demo.tar
    -rw-r--r-- gfolkert/users    0 2002-12-27 13:29:40 foo
    -rw-r--r-- gfolkert/users    0 2002-12-27 13:29:40 foo at bar
    -rw-r--r-- gfolkert/users    0 2002-12-27 13:29:40 foo not bar
    [gfolkert@paladin demo]$ mkdir test
    [gfolkert@paladin demo]$ cd test/
    [gfolkert@paladin test]$ tar -xvf ../demo.tar foo\ at\ bar
    foo at bar
    [gfolkert@paladin test]$ ls -l
    total 0
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo at bar
    [gfolkert@paladin test]$ tar -xvf ../demo.tar "foo not bar"
    foo not bar
    [gfolkert@paladin test]$ ls -l
    total 0
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo at bar
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo not bar
    [gfolkert@paladin test]$ tar -xvf ../demo.tar foo bar
    foo
    tar: bar: Not found in archive
    tar: Error exit delayed from previous errors
    [gfolkert@paladin test]$ ls -l
    total 0
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo at bar
    -rw-r--r--    1 gfolkert users        0 Dec 27 13:29 foo not bar

See that... THAT is what you are talking about not being supported. Bah...
[link|mailto:curley95@attbi.com|greg] - Grand-Master Artist in IT [link|http://www.iwethey.org/ed_curry/|REMEMBER ED CURRY!] [link|http://pascal.rockford.com:8888/SSK@kQMsmc74S0Tw3KHQiRQmDem0gAIPAgM/edcurry/1//|ED'S GHOST SPEAKS!] | Your friendly Geheime Staatspolizei reminds: [link|http://www.wired.com/news/wireless/0,1382,56742,00.html| Wi-Fi Terrorism] comes with an all inclusive free trip to the local Hoosegow! | I'll never tell, my *overly-red* lips are sealed! *wink* *wink* |
|
Post #71,105
12/27/02 3:53:29 PM
|
point==missed
If you know the file name, you can sort it out by calling for it in a manner the OS will understand. If you are moving a huge subdir and you have a couple of badly named files in there, they will not get moved over at the command line; you will have to parse it, which is a little more tedious than "tar this and untar this". That was the point I was trying to make. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,107
12/27/02 4:03:58 PM
|
Windows, MAC OSX, MacOS9.x, *NIX
All have that same tendency.
Try and move a file under windows that is named screwy... like "*A* Stoopid question\\answer/session? Me & Him .umx"
There isn't a single OS, barring VMS or OS/400, that could address it easily.
[link|mailto:curley95@attbi.com|greg] - Grand-Master Artist in IT [link|http://www.iwethey.org/ed_curry/|REMEMBER ED CURRY!] [link|http://pascal.rockford.com:8888/SSK@kQMsmc74S0Tw3KHQiRQmDem0gAIPAgM/edcurry/1//|ED'S GHOST SPEAKS!] | Your friendly Geheime Staatspolizei reminds: [link|http://www.wired.com/news/wireless/0,1382,56742,00.html| Wi-Fi Terrorism] comes with an all inclusive free trip to the local Hoosegow! | I'll never tell, my *overly-red* lips are sealed! *wink* *wink* |
|
Post #71,145
12/27/02 6:56:50 PM
|
ed zachery
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,148
12/27/02 7:46:43 PM
|
Windows has the bigger problem...
...in that those configuration parameters and file attributes can't be copied by way of the standard utilities, because they are in the Registry.
|
Post #71,146
12/27/02 7:05:36 PM
|
Wrong
You're confusing IFS with the OS's inherent ability to manipulate files with spaces. When working through the shell, these names need to be quoted. Tar handles 'em fine:

    [karsten@academy:karsten]$ cd tmp
    [karsten@academy:tmp]$ mkdir boxley
    [karsten@academy:tmp]$ cd boxley
    [karsten@academy:boxley]$ touch "a file with spaces"
    [karsten@academy:boxley]$ touch "another of the same"
    [karsten@academy:boxley]$ ls
    a file with spaces   another of the same
    [karsten@academy:boxley]$ cd ..
    [karsten@academy:tmp]$ tar czvf boxley.tar.gz boxley/
    boxley/
    boxley/a file with spaces
    boxley/another of the same
    [karsten@academy:tmp]$ mkdir untar
    [karsten@academy:tmp]$ cd untar
    [karsten@academy:untar]$ tar xzvf ../boxley.tar.gz
    boxley/
    boxley/a file with spaces
    boxley/another of the same
    [karsten@academy:untar]$ ll
    total 1
    drwxr-sr-x    2 karsten  karsten  128 Dec 27 18:02 boxley
    [karsten@academy:untar]$ ll boxley/
    total 0
    -rw-r--r--    1 karsten  karsten    0 Dec 27 18:02 a file with spaces
    -rw-r--r--    1 karsten  karsten    0 Dec 27 18:02 another of the same
--
Karsten M. Self [link|mailto:kmself@ix.netcom.com|kmself@ix.netcom.com]
[link|http://kmself.home.netcom.com/|http://kmself.home.netcom.com/]
What part of "gestalt" don't you understand?
[link|http://twiki.iwethey.org/twiki/bin/view/Main/|TWikIWETHEY] -- an experiment in collective intelligence. Stupidity. Whatever.

Keep software free. Oppose the CBDTPA. Kill S.2048 dead.
[link|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html]
|
Post #71,152
12/27/02 8:09:51 PM
|
The reason I brought this up was recovering files from a
doze box to a linux box running rhat 7.0. Mounted an NT drive from the linux box and tarred about a gig onto the linux box as tarball.tar. I ftp'd the file to a second linux box running 7.0 and untarred the file. On extraction it had problems with the file-naming convention with spaces. I manually had to crawl thru the doze box directory by directory to get the files. That is why I brought it up. It appears to not be a problem, as shown by your post. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,167
12/27/02 10:15:59 PM
|
Two possible issues I can think of...
...barring your posting the commands used.

Some tars have an absolute cap on filename length. I'm not sure what this is with GNU tar, but recursing long names into a directory should identify whether this exists at any of the likely intervals (256, 512, 1024, 2048, 4096 characters, etc.). I have encountered this problem, possibly on either tar, CD-ROM (the iso9660 format also has a total name-length cap), or NTFS.

Depending on how you were selecting files for inclusion in your tar archive, you may have run into the shell quoting issues mentioned earlier. You'll find that GNU tar does allow for null-terminated filenames, to work around the problem of embedded whitespace. Some folks feed find output to tar, in which case the problem can crop up. One of the reasons I prefer tar over such alternatives as cpio and afio is that tar will handle directory recursion itself, while the other tools want input fed from stdin, by means that always require heavy manpage decoding by me. Note that there are some commendable advantages to the filestructures used by cpio & afio; I'm not criticising these.

Bill, you made an absolute statement of what tar could or couldn't do. It's demonstrably wrong for GNU tar. If you care to show methods, we might be able to identify the source of the problem. It's not what you've represented it to be.
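For the record, a minimal sketch of the null-terminated approach, assuming GNU find and GNU tar:

    # find emits NUL-terminated names; --null tells tar's -T to expect the same,
    # so embedded spaces (even newlines) in filenames survive intact:
    find . -type f -print0 | tar --null -T - -cf archive.tar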
--
Karsten M. Self [link|mailto:kmself@ix.netcom.com|kmself@ix.netcom.com]
[link|http://kmself.home.netcom.com/|http://kmself.home.netcom.com/]
What part of "gestalt" don't you understand?
[link|http://twiki.iwethey.org/twiki/bin/view/Main/|TWikIWETHEY] -- an experiment in collective intelligence. Stupidity. Whatever.

Keep software free. Oppose the CBDTPA. Kill S.2048 dead.
[link|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html|http://www.eff.org/alerts/20020322_eff_cbdtpa_alert.html]
|
Post #71,180
12/27/02 10:43:35 PM
|
It was over a year ago so bear with me
Had an NTFS box with a crossover cable connected to a linux laptop. The drive was mounted via NFS to the /mnt mount point. Used SOSNT (son of sam nt, an NFS server for NT). At the command line in bsh, from the root directory, used "tar cvf tarball.tar mnt". After creating the file I disconnected the NT box and, using the regular network, ftp'd the tar file to a third linux box. Ran the command "tar xvf tarball.tar". I noticed the occasional error at that point - "foo file name" cannot extract file, cannot extract name. So I assumed it had a problem writing, and went back to the source to manually get the files that weren't untarred. Now from your example it works just fine. I was using ext2 file systems on both linux boxes and RH 7.0. Apparently what I saw belonged to another issue, because as you have shown here it works just fine. Also works on darwin when I retested after your post. So I saw an aberration? thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,246
12/28/02 5:01:08 PM
12/28/02 5:02:03 PM
|
Once again, Todd's explanation
You are halfway to the problem. The problem is that sometimes MacOS uses special files, and sometimes not. tar does not understand this.
We have two filesystems. One of which (HFS) stores a file with lots of hooks for metadata. One of which (UFS) only directly supports the metadata that any Unix system has.
Going from HFS to UFS, what do you do with the important metadata? Apple decided to create new files for it. Therefore you cannot move a file from HFS to UFS without creating extra files.
The tar shipped with OS X doesn't understand this. And therefore does not create the needed extra files, and loses important metadata moving from HFS to UFS.
Personally (admittedly without much serious thought), what I would have been inclined to do in Apple's situation is have two sets of APIs for accessing files: one which sees a traditional HFS file with metadata (on either system), the other which sees the associated collection of files that are needed on UFS. Yes, there are problems with that as well. But then you would at least find it easy to get the appropriate consistent behaviour across filesystems.
BTW Linux may have a similar problem some day. (May already have in fact...) There is exploration of the idea of supporting streams within files in some filesystems. Copying a file with streams to a filesystem without streams has to be done...how? I haven't exactly been following discussion on this, but the following old post illustrates the issues that have to be thought through:
[link|http://web.gnu.walfield.org/mail-archive/linux-kernel/2000-August/1275.html|http://web.gnu.walfi...-August/1275.html]
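For a rough Linux-flavoured sketch of the same trap (assuming the extended-attribute tools and a filesystem mounted with xattr support; xattrs play much the same side-band role as streams):

  setfattr -n user.comment -v "important" report.txt
  getfattr -d report.txt            # shows user.comment
  cp report.txt /mnt/vfat/          # plain cp onto a filesystem with nowhere to put it
  getfattr -d /mnt/vfat/report.txt  # attribute silently gone

The data arrives intact and nothing complains; the side-band metadata simply evaporates.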
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
Edited by ben_tilly
Dec. 28, 2002, 05:02:03 PM EST
|
Post #71,269
12/28/02 8:05:12 PM
|
Using a hammer to drive screws will hurt
Ben wrote:
The tar shipped with OS X doesn't understand this. And therefore does not create the needed extra files, and loses important metadata moving from HFS to UFS.
If you understand how files are stored, then you immediately realise why GNU tar is an inappropriate tool for such operations. A "problem" exists only if you're part of a funky computer religion that implicitly assumes you shouldn't have to understand the fundamentals of what you're doing.
BTW Linux may have a similar problem some day. (May already have in fact...)
That wouldn't be a "problem" any more than the ability to overwrite files with the "cat" utility is a "problem": Using an obviously unsuitable tool for a particular task, or otherwise aiming a gun at your foot and pulling the trigger, will inevitably have suboptimal results or worse.
So Don't Do That, Then.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,338
12/29/02 5:23:15 PM
|
I understand your position perfectly
You believe that all abstractions should be useless, and that nobody should be able to trust in anything without knowing the whole system perfectly.
There is a name for code produced by programmers who think like that. That name is spaghetti. And systems that are built like that inevitably become a [link|http://www.laputan.org/mud/mud.html|Big Ball of Mud].
Yes, I am part of a funky computer religion that implicitly assumes that you shouldn't have to understand the fundamentals of what you're doing - at least not all of the time. Abstractions like files and directories serve an important purpose, and the abstractions should not be broken lightly. And I am proud to say that that is a Good Thing.
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #71,362
12/29/02 7:17:38 PM
|
Re: I understand your position perfectly
Ben wrote:
I understand your position perfectly. You believe that all abstractions should be useless, and that nobody should be able to trust in anything without knowing the whole system perfectly.
What an absolutely fabulous straw man you have there. May I take a whack at it, too, or do you have proprietary rights?
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,572
12/30/02 6:16:31 PM
|
If it is not a straw man...
Then please explain how your position differs appreciably from what I described.
Regards, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #71,811
12/31/02 10:14:16 PM
|
Re: If it is not a straw man...
Ben wrote:
If it is not a straw man, then please explain how your position differs appreciably from what I described.
If I could figure out how you derived the extremely strange view you described immediately before attributing it to me, and then went on to draw even more peculiar conclusions from it, I'd gladly tell you which wrong turn you took on that road. But it's way too damn bizarre for my tastes, and it should suffice to say "No, I most certainly don't believe that, nor does anyone, actually."
I'm willing to believe that you honestly thought that. I guess. But I'm not going to try to prove to you that I don't hold a rather odd and silly view that strikes me as nothing at all like what I said.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,962
1/2/03 4:55:17 AM
|
Funny, I concluded the same as Ben
YTF should I have to understand how the file system works to use it?
Do I also need to know the voltages in my ram chips to make proper use of memory in my software?
And of course, I mustn't write anything using sockets unless I clearly understand the inner workings of the entire communications stack and what is actually happening on the wires.
Better not drive a car without being able to calculate the thermal energy per unit of fuel - otherwise you'll probably run out someplace. You certainly can't trust that little needle - especially if you don't know how it works.
At some point, you have to use *abstractions* in order to move forwards or you will never transcend your current level of minutiae.
Your assertion that you must know the inner workings of the file system in order to use it is stupid.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,970
1/2/03 5:31:20 AM
|
Re: Funny, I concluded the same as Ben
ToddBlanchard wrote:
YTF should I have to understand how the file system works to use it?
Well, you needn't, of course. Just concentrate on watching the pretty pictures. Don't worry; be happy. Other people will take care of the technical stuff, and you can just wait for it to be delivered in idiotproofed form.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #72,164
1/3/03 4:27:19 AM
|
Little story - you might find it entertaining
I've been an avid skydiver since 1983. I've got over 24 hours of free fall time and over 1000 jumps (which means I've fallen something like 1500 miles).
When I started in the sport, before standardization of equipment, the number one cause of deaths wasn't "failed parachutes" (which was maybe 5% of problems - and today it's less than a whole percent). It was "borrowed gear".
Every month I would eagerly rip open my new issue of Parachutist to read the accident reports - because it's valuable (and safer) to learn from other people's mistakes.
In the early 80's skydiving was going through a transition period from outlaw practice to viable business. While lots of military surplus gear was still common, the first gear created strictly for pleasure jumping was beginning to appear on the market from a half dozen different companies.
The result was stuff shipped with a wide variety of "user interfaces" (handle placements). Some put the main deployment handle on a band crossing the belly, others on the left hip or the right hip. The bottom of the pack was a common place, while older gear had the main ripcord on the chest. Reserve ripcords were typically on the chest next to the cutaway handle for separating from the main parachute (if it's trash, you want it gone before opening old faithful), but left and right placement varied. Handle appearance varied as well. So you couldn't just grab a rig and know what's what. You had to ask the owner. I mean, if you don't know what it is, maybe you shouldn't jump it.
But people sometimes borrow gear because their stuff isn't packed when the plane is ready to go. So very knowledgeable skydivers would borrow something, get a quick explanation of what was where (there are only three handles on a parachute: main, reserve, cutaway), and go make a jump.
And then they'd hit the ground pulling on the wrong thing. Lots of them. Gurus. Guys who had jumped everything under the sun and were as comfortable freefalling as lying in a hammock would bounce. The investigation would typically find that the guy was jumping something he had borrowed on the spur of the moment.
This cycle is repeating among the BASE jumping community as BASE gear evolves. You may remember the news story from a couple years ago of Jan Davis - one of the best and brightest - very publicly bouncing during a protest jump at Yosemite in 1999. The park officials were going to arrest the participants and confiscate their gear - so she borrowed something less nice than her regular rig. The unfamiliarity killed her (it wasn't lack of knowledge - she knew how it worked - she had hundreds of jumps on it).
The moral of this story is that variety kills. Today every single rig on the market has the handles laid out in exactly the same way, and we don't have those kinds of accidents anymore. So for the same reason, I think it's dangerous to mix file system types on a computer. At least until the tools evolve to properly handle the issues.
If the handles on the different file systems are different, accidents will happen.
Better knowledge is not the answer. Standardizing the interfaces is. All tools must become multi-filesystem aware or, as Arkadiy says, we might as well keep sector maps on a pad of paper by our desk.
Of course, you seem to think you're bulletproof and your "knowledge" will protect you. It won't. You'll forget what file system you're on one day and slip. Like all those former skygods I used to know.
The dead ones.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #72,212
1/3/03 10:21:21 AM
|
Excellent story. Going on the wall.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #72,237
1/3/03 11:59:25 AM
|
This my dear friend... (new thread)
Created as new thread #72236 titled [link|/forums/render/content/show?contentid=72236|This my dear friend...]
[link|mailto:curley95@attbi.com|greg] - Grand-Master Artist in IT [link|http://www.iwethey.org/ed_curry/|REMEMBER ED CURRY!] [link|http://pascal.rockford.com:8888/SSK@kQMsmc74S0Tw3KHQiRQmDem0gAIPAgM/edcurry/1//|ED'S GHOST SPEAKS!] | The Heimatland Geheime Staatspolizei reminds: [link|http://www.wired.com/news/wireless/0,1382,56742,00.html| Wi-Fi Terrorism] comes with an all inclusive free trip to the local Hoosegow! | Please visit [link|http://z.iwethey.org/forums/render/board/show?boardid=1|iwethey.anti.anti++], providing *THE* alternative to iwethey.anti-- since June 18, 2001 22:00EST | I'll never tell, my *overly-red* lips are sealed! *wink* *wink* |
|
Post #72,456
1/4/03 12:43:54 PM
|
Since you seem to be missing the point...
Here it is again.
Your attitude is that people should not use tools unless they know everything relevant to their proper usage. Your definition of what is relevant seems to be that if it might come up, then they need to know it. This is a circular definition of relevancy that justifies any particular dependency, in hindsight, as the user's fault for not knowing better.
Now I know that [link|http://www.joelonsoftware.com/articles/LeakyAbstractions.html|abstractions leak]. (That is one of the few articles by Joel that I agree with, incidentally.) Furthermore, I often find myself in the position of being the person around who understands which abstractions leak, and why. There is no fundamental solution to that problem. Shit happens, and plumbers are needed for it.
However one mark of a programmer that you want to have around is the ability to see shit and recognize it for what it is. Leaking abstractions are signs of shit. While they may be inevitable, they are not something that you want to take lightly. Because a system built by people with too much tolerance for that inevitably degrades into a complete mess.
In this case the leaking abstraction is one of the worst kinds to have. Different filesystems use the same words for subtly different abstractions. This makes it hard for most people to even verbalize that there is a difference, let alone what it is. (When the same thing happens in speech you get classic threads where people talk past each other at length.) And when you try to combine them into one abstraction, the gaps keep on leaking past.
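One concrete instance of the same-words problem (hedged: the filename and sizes are invented, and the /rsrc pseudo-path is an early-OS-X convention): on HFS+ a "file" can be two forks, and the standard tools only count one of them:

  $ ls -l ClarisDoc
  -rw-r--r--  1 todd  staff  1024 Jan  3 10:00 ClarisDoc
  $ cat ClarisDoc/rsrc | wc -c
      8192

Both commands are talking about "the file", and they are not talking about the same bytes.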
So yes, it happens for good reason. But it is a basic design flaw in the system. And it is not one to belittle people for getting tripped up by, nor is it one that has a simple right answer.
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #74,798
1/16/03 4:13:56 AM
|
Re: Since you seem to be missing the point...
Ben Tilly wrote:
Your attitude is that people should not use tools unless they know everything relevant to their proper usage.
Not quite. Rather: I mildly (and not at all insistently, given that it's really not my problem) suggest that people should take responsibility for what they do. If you choose to use tools without adequately understanding them, you can pound your thumb with a misaimed hammer. The choice of taking that risk may be reasonable; the point is to not expect pity when you screw up and hurt yourself.
Personally, I prefer education over "learning experiences". Less painful.
Remainder of your post duly ignored as furious and tedious pummeling of an irrelevant straw man. Therapeutic though, I'm sure.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,434
12/30/02 5:21:30 AM
12/30/02 7:52:16 AM
|
I do not think that word means what you think it means.
"Using an obviously unsuitable tool for a particular task"
One day you will perhaps have to define your version of this word.
Firing up a terminal on OS X and navigating to an HFS file system, I find that I can:
ls cat mv cp chown chgrp chmod ln rm
Within an HFS volume and across HFS volumes just fine.
IOW, it appears that the developers of OS X have taken some pains to make sure that these tools work as expected on both UFS and HFS volumes. In fact, the Fred Sanchez paper I cited a while back describes the interesting strategies (emulating hard links, soft links, file ownership, etc.) that were used to make the system behave as expected.
It's unfortunate that the illusion is incomplete (though I remain hopeful that this will be rectified in a future release).
But sometime you are going to have to share your definition of obvious with us.
Because what is clearly obvious to me is that what you claim to be obvious is not at all obvious to a daily user of the system.
It had been some months of daily command-line development (I generally use vi and makefiles when I work, so it's all in the shell) before I was bitten by the tar/cp/mv issue.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,440
12/30/02 8:21:08 AM
|
Hello, you must be going.
Hi there! I said we were done, and I meant that.
However, on your way back to whatever it is that you do, you might consider the merits of consulting manpages before just blithely assuming that someone has custom-modified a standard tool like GNU tar and GNU cpio/afio to perform operations they do nowhere else on the planet.
Or, if you prefer, you can ignore such considerations and bitch when reality bites you in the ass. Feel free to work out your options, somewhere.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,534
12/30/02 3:47:08 PM
|
This is obvious?
You wanna point out the obvious part?
Yeah, right. I don't see the letters UFS in there anywhere.
I don't see the letters HFS in there anywhere.
I imagine what you meant by "man" must have been the source code, eh?
>man tar
TAR(1) System General Commands Manual TAR(1)
NAME tar - tape archiver
SYNOPSIS tar [-]{crtux}[befhmopvwzHLPXZ014578] [archive] [blocksize] [-C directory ] [-s replstr ] file1 [file2...]
DESCRIPTION The tar command creates, adds files to, or extracts files from an archive file in tar format. A tar archive is often stored on a magnetic tape, but can be a floppy or a regular disk file.
One of the following flags must be present:
-c Create new archive, or overwrite an existing archive, adding the specified files to it.
-r Append the named new files to existing archive. Note that this will only work on media on which an end-of-file mark can be overwritten.
-t List contents of archive. If any files are named on the command line, only those files will be listed.
-u Alias for -r
-x Extract files from archive. If any files are named on the command line, only those files will be extracted from the archive. If more than one copy of a file exists in the archive, later copies will overwrite earlier copies during extraction.
In addition to the flags mentioned above, any of the following flags may be used:
-b blocking factor Set blocking factor to use for the archive; tar uses 512 byte blocks. The default is 20, the maximum is 126. Archives with a blocking factor larger than 63 violate the POSIX standard and will not be portable to all systems.
-e Stop after first error.
-f archive Filename where the archive is stored. Defaults to /dev/rst0
-h Follow symbolic links as if they were normal files or directories.
-m Do not preserve modification time.
-O Write old-style (non-POSIX) archives.
-o Don't write directory information that the older (V7) style tar is unable to decode. This implies the -O flag.
-p Preserve user id, group id, file mode, access and modification times if possible. The user id and group id will only be set if the user is the superuser (unless these values correspond to the user's user and group ids).
-s replstr Modify the file or archive member names specified by the pattern or file operands according to the substitution expression replstr, using the syntax of the ed(1) utility regular expressions. The format of these regular expressions is: /old/new/[gp] As in ed(1), old is a basic regular expression and new can contain an ampersand (&), \\n (where n is a digit) back-references, or subexpression matching. The old string may also contain <newline> characters. Any non-null character can be used as a delimiter (/ is shown here). Multiple -s expressions can be specified. The expressions are applied in the order they are specified on the command line, terminating with the first successful substitution. The optional trailing g continues to apply the substitution expression to the pathname substring which starts with the first character following the end of the last successful substitution. The first unsuccessful substitution stops the operation of the g option. The optional trailing p will cause the final result of a successful substitution to be written to standard error in the following format: <original pathname> >> <new pathname> File or archive member names that substitute to the empty string are not selected and will be skipped.
-v Verbose operation mode.
-w Interactively rename files. This option causes tar to prompt the user for the filename to use when storing or extracting files in an archive.
-z Compress archive using gzip.
-C directory This is a positional argument which sets the working directory for the following files. When extracting, files will be extracted into the specified directory; when creating, the specified files will be matched from the directory.
-H Follow symlinks given on command line only.
-L Follow all symlinks.
-P Do not strip leading slashes (``/'') from pathnames. The default is to strip leading slashes.
-X Do not cross mount points in the file system.
-Z Compress archive using compress.
The options [-014578] can be used to select one of the compiled-in backup devices, /dev/rstN.
FILES /dev/rst0 The default archive name
SEE ALSO pax(1), cpio(1)
AUTHOR Keith Muller at the University of California, San Diego
ERRORS tar will exit with one of the following values:
0 All files were processed successfully.
1 An error occurred.
Whenever tar cannot create a file or a link when extracting an archive or cannot find a file while writing an archive, or cannot preserve the user ID, group ID, file mode or access and modification times when the -p option is specified, a diagnostic message is written to standard error and a non-zero exit value will be returned, but processing will continue. In the case where tar cannot create a link to a file, tar will not create a second copy of the file.
If the extraction of a file from an archive is prematurely terminated by a signal or error, tar may have only partially extracted the file the user wanted. Additionally, the file modes of extracted files and directories may have incorrect file bits, and the modification and access times may be wrong.
If the creation of an archive is prematurely terminated by a signal or error, tar may have only partially created the archive which may violate the specific archive format specification.
BSD June 11, 1996 BSD
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,812
12/31/02 10:20:55 PM
|
Re: This is obvious?
ToddBlanchard wrote:
You wanna point out the obvious part?
It's the part where you understand the basics of how files are stored on your filesystems, understand how GNU tar and GNU cpio work, and avoid assuming (without confirmation) that those tools perform special functions there that they carry out nowhere else on the planet.
Khendon's Law duly invoked. Va t'en.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,868
1/1/03 3:14:02 PM
|
At no time did you utter the word "obvious" - nice dodge
thus you did not answer the question.
What is obvious to me is your lack of knowledge and use of bluster to compensate.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,891
1/1/03 5:34:29 PM
|
Re: At no time did you utter the word "obvious" - nice dodge
ToddBlanchard wrote, before reverting to argumentum ad hominem:
thus you did not answer the question.
Indeed, Chuckles. I was addressing the UFS/HFS topic at hand, which does not necessarily mean "answering your questions". You have perhaps confused me with some paid sidekick.
Now, run along.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,965
1/2/03 5:02:05 AM
|
Topic at hand - point out what you claim is "obvious" (new thread)
Created as new thread #71964 titled [link|/forums/render/content/show?contentid=71964|Topic at hand - point out what you claim is "obvious"]
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,950
12/26/02 4:10:28 PM
12/26/02 4:13:21 PM
|
Basic point seems to have been missed
Ben wrote:
Take files on an HFS filesystem. Tar them using standard unix tools.
That would be an obvious error, right there, and implies being clueless about how file storage works. The quoted suggestion is -- for anyone who understands how file storage works on MacOS X -- a bad idea regardless of whether you're going to untar them onto UFS or HFS+.
So, as the old joke goes, Don't Do That, Then. Part of the point of having most of your filesystem space be UFS is so that you can use standard Unix tools within the UFS majority storage (provided that you transport resource dotfiles with the related regular files). Saying that you can't reliably use such tools to move files onto HFS+ misses the point: You can't reliably use them within HFS+, either.
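For what it's worth, a hedged sketch of transporting the dotfiles along (filenames invented): so long as the AppleDouble companion travels with its data file and both land back in the same directory, the pairing survives:

  tar cf backup.tar Report.doc ._Report.doc

Drop either half and the metadata association is broken.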
The Church of Steve solution is to use solely the worse filesystem, and deprive yourself completely of standard Unix filehandling tools, "because they're dangerous, and those recommending them are irresponsible".
"Fire bad, burn Lorto's finger, cause disharmony among tribe. No future in it. Stick to drawing icons on cave wall."
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
Edited by rickmoen
Dec. 26, 2002, 04:13:21 PM EST
|
Post #70,956
12/26/02 4:33:37 PM
|
Yes you have missed it
every time. You can't seem to follow the thread. Ben exactly nailed it.
"Take files on an HFS filesystem. Tar them using standard unix tools."
Can you manage to follow those directions? Apparently not. I believe cp and mv have similar issues, BTW.
You say: "That would be an obvious error, right there, and implies being clueless about how file storage works."
That would be perhaps one of the more pompous and idiotic things you've ever said. Which is saying a lot.
What exactly is *obvious* about that error?
I have a commercial unix, with command line tools, that has a proprietary file system. For the most part the command line tools behave exactly as they do on any other unix. cat, mv, cp, ls, ln, all work like you would expect despite the underlying file system being HFS.
And with these tools all working as expected, I should obviously expect that the tar implementation that ships with that system would have issues with the files? Pray tell why? Clearly, the other tools have been modified to work on HFS.
It seems to me the very height of reasonable behaviour that a vendor shipping an OS with a set of file-manipulation utilities would go to a bit of work to extend those utilities to work properly on its own filesystem. The fact that they didn't is what I found really very surprising.
Fortunately, the community has answered and some extended utilities have been developed like the aforementioned hfstar, which works exactly like tar - because it is - only on hfs systems it converts the resource forks to the directory wrapper format you keep harping about on your wife's system.
Which is basically what I expected Apple would have shipped with the system and called "tar".
So shame on Apple for shipping lame assed tools like this.
But to imply that this is obvious is - well - really quite stupid.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #70,964
12/26/02 4:48:39 PM
|
I see so to use OSX effectively
blow away your 9.x partition with all those legacy apps and data. Reformat to UFS so unix tools will work. Sounds like a winblows solution to me. Use the tool that does the job and have the wisdom to know the difference. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #71,529
12/30/02 3:10:18 PM
|
Never used Apple OSes in my life,
but if you need "technical competence" to copy files from one directory to another, you've got a lousy system. Better off not using it.
--
We have only 2 things to worry about: That things will never get back to normal, and that they already have.
|
Post #71,813
12/31/02 10:24:41 PM
|
Re: Never used Apple OSes in my life,
Arkadiy wrote:
Never used Apple OSes in my life, but if you need "technical competence" to copy files from one directory to another, you've got a lousy system. Better off not using it.
Yeah, don't use GNU cp; it'll allow you to accidentally overwrite valuable files 'n' stuff. People who recommend it are irresponsible. WebTV for everyone!
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,864
1/1/03 2:50:31 PM
|
For certain kind of user
(for root by default), cp is indeed set up to confirm file overwrite. But that's beside the point. There is a difference between faithfully executing what you're told to do and corrupting data. If I can't rely on my tools to do what I tell them to, if I have to keep in mind details like filesystems, then the abstraction has been breached. I might as well keep a list of disk sector allocations on paper somewhere. Ben Tilly already addressed it better than I ever could.
Think of all the scripts that use cp hoping that it works... Suppose your /tmp uses the new FS; then copying a file there and back corrupts it. How many scripts keep copies of files in /tmp?
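A hedged sketch of that failure mode (paths invented; assume /tmp sits on the fork-less filesystem):

  cp precious.doc /tmp/precious.$$     # stash a working copy
  cp /tmp/precious.$$ precious.doc     # restore it - resource fork now gone

Neither command errors out: the data fork round-trips perfectly, and the metadata quietly vanishes.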
--
We have only 2 things to worry about: That things will never get back to normal, and that they already have.
|
Post #71,889
1/1/03 5:28:32 PM
|
Re: For certain kind of user
Arkadiy wrote:
For certain kind of user (for root by default), cp is indeed set up to confirm file overwrite.
I'll leave it as an exercise for the reader to determine why this is a strategic error. Shouldn't be difficult; the question comes up frequently.
But that's beside the point.
Well, no, it's not, actually.
There is a difference between faithfully executing what you're told to do and corrupting data.
ObVious: Not when faithfully executing what you're told to do corrupts data. (I'd say deletion is the extreme form of corruption, nicht wahr?)
Anyhow, this has long exceeded the point of silliness. I'll leave you with an apposite quotation from the Scary Devil Monastery, which I just ran across and found amusing:
DON'T MAKE THAT FACE WHEN I TELL YOU TO READ THE F*CKING MANUAL! IT'S GOOD FOR YOU I SAY! READ THE F*CKING MANUAL! How do you think I found out how the machine works? DID I SIT AROUND ASKING SOMEBODY FOR A FEW MONTHS?? -- Beable van Polasm
(Ya wanna argue? Fine, go find van Polasm.)
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #71,966
1/2/03 5:07:06 AM
|
Gee - got an ISBN for that manual?
Or maybe a command I can execute?
It can't be "man".
Because when I gave you the results of that, you couldn't point anything useful out.
I am out of the country for the duration of the Bush administration. Please leave a message and I'll get back to you when democracy returns.
|
Post #71,969
1/2/03 5:27:29 AM
|
Re: Gee - got an ISBN for that manual?
Gee, I'm sorry: if you're having that much difficulty understanding how files are stored on your OS of choice and how your software tools work, you'll have to find a tutor, as evidently the fairly detailed explanations here are not sufficing. You'll have to see someone in your local vicinity. Good luck to you.
Rick Moen rick@linuxmafia.com
If you lived here, you'd be $HOME already.
|
Post #72,462
1/4/03 12:58:00 PM
|
He had no difficulty in figuring out how the system works
In fact his description was good enough for me to understand exactly what the problem was and why it happens.
His difficulty is in figuring out where and when most users should learn that this is obvious. You tell him that he should RTFM. He is asking you which M is TFM to R. You have so far failed to come up with a shorter answer than "Become a guru first, then it will be obvious", which seriously begs the question.
Frankly I am having the same difficulty that Todd is. I see the problem. I see why it is a problem. I see problems with virtually any attempted solution. However I also don't see any documentation that most users - even more competent technical users - would see that says this.
Let me put this another way. If I explain this problem to someone with a moderately strong technical inclination who did not know it, what can I tell them that they should learn about (short of "everything") to make this kind of thing obvious to them in the future?
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #72,048
1/2/03 12:53:27 PM
|
rm - the ultimate corruptor
--
We have only 2 things to worry about: That things will never get back to normal, and that they already have.
|
Post #72,224
1/3/03 11:09:42 AM
|
ouch! R.M. - the ultimate corruptor.
--
We have only 2 things to worry about: That things will never get back to normal, and that they already have.
|
Post #72,535
1/4/03 5:34:25 PM
|
rm -rf /
The TRUE ultimate corruptor.
"Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement. For even the wise cannot see all ends." - J.R.R. Tolkien, The Fellowship of the Ring.
|
Post #72,343
1/3/03 6:17:33 PM
|
isn't that the beauty of nix? It does what it is commanded to
it's up to the user to determine if that is what they wanted. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|
Post #72,375
1/3/03 9:15:50 PM
|
Well, personally..
..I'd been involved with UNIX for about 20 minutes, learning redirection, when I realized that everything in UNIX was a file - I was blown away when this hit me. I remember my co-worker root had this peculiar glow in his eye when I was freaking out about having "discovered" this :)
-drl
|
Post #72,384
1/3/03 10:00:52 PM
|
yup my intro to unix
was with a genius named Tom giving me a six-week course of hardcore nix and hardware. He was one of the builders of the original Livermore supercomputer and has an awesome brain. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
You think that you can trust the government to look after your rights? ask an Indian
|