I was concerned that the machine I would be using to prepare the USB stick is still running Fedora 14. This turned out not to be a problem at all. I have the ISO image for the Fedora 16 install DVD, and there are instructions for putting it onto a USB stick. I do the following:
yum install livecd-tools

And this fetches the following 8 packages:
================================================================================
 Package            Arch     Version        Repository  Size
================================================================================
Installing:
 livecd-tools       x86_64   1:14.2-1.fc14  updates      51 k
Installing for dependencies:
 lzo                x86_64   2.03-3.fc12    fedora       53 k
 pykickstart        noarch   1.77-2.fc14    fedora      268 k
 pyparted           x86_64   3.4-5.fc14     updates     185 k
 python-imgcreate   x86_64   1:14.2-1.fc14  updates      85 k
 squashfs-tools     x86_64   4.1-2.fc14     updates     110 k
 syslinux           x86_64   4.02-3.fc14    fedora      855 k
 syslinux-extlinux  x86_64   4.02-3.fc14    fedora      337 k

I already have mkdosfs on my system, which is required by some recipes for making bootable USB sticks.
So after getting the packages, I just do this with a fresh, out-of-the-box flash stick:
[root@trona Data]# livecd-iso-to-disk Fedora-16-x86_64-DVD.iso /dev/sdb1
Verifying image...
/u1/tom/cd_archive/Data/Fedora-16-x86_64-DVD.iso:   ed43ccbd8744dc4ea987bf5152ebc2ce
Fragment sums: 5484d54b34f24e13639bc6a7afbebed5a4c1de97ff4754a6f8dc5fd1ecd1
Fragment count: 20
Press [Esc] to abort check.
Checking: 100.0%

The media check is complete, the result is: PASS.

It is OK to use this media.
/u1/tom/cd_archive/Data/Fedora-16-x86_64-DVD.iso uses initrd.img w/o install.img
Size of DVD image:  3584
Size of isolinux/initrd.img:  130
Available space:  7498
Copying DVD image to USB stick
Updating boot config file
Installing boot loader
USB stick set up as live image!
And this works! This booted up fine in just minutes on my ASUS P8Z68-V system with essentially no hassle.
Getting the Intel S1200BTL system to boot from a USB drive is apparently impossible, or at least is a research project involving EFI shell shenanigans, and I abandon the effort.
This is par for the course. I always have the least trouble with products from Taiwan.
This product ships with a CD containing drivers and some documentation, but ultimately even this disk directs you to the Intel website:
There is a hint under "Intel Deployment Assistant" in the following line:

For help in using the EFI environment, including mounting a USB drive, see the EFI Toolkit located on the support web page.

EFI, by the way, stands for "Extensible Firmware Interface", which is the moral equivalent of the BIOS firmware. It is sometimes referred to as the "pre-boot environment". I guess the word BIOS is now old school and the world is groping for new terms. Apparently in 2005, Intel decided the old BIOS standards (16 bit and all) had gotten too crusty and needed a complete overhaul. The original EFI has since been superseded by UEFI. My ASUS motherboard calls its firmware a UEFI BIOS, which I think is a good choice of terms.
I have had no luck finding the "EFI Toolkit" mentioned on their CD, but I have gotten the clue that I need to interact with the EFI shell to boot from USB. The boot order I found on this machine was:
I find it difficult to get into the BIOS. Holding down F2 (not just pressing it off and on) seems to yield the results I want. F6 gets into a boot menu rather than the whole setup interface. Holding down F6 seems to be the thing to do.
This document is useful:
Once in the EFI shell, it is possible to examine and even boot from removable media (like USB flash drives), but they must be formatted as FAT16 or FAT32. (As mentioned, the EFI shell is Windows centric.)

map -r

will refresh mounting and mapping, which allows devices to be plugged in at any time and accessed. The first USB device found will show up as "fs0". To "change" to that device, type:
fs0:
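Since the EFI shell can only read FAT, a stick carrying anything else has to be reformatted before fs0: will show anything useful. Here is a minimal sketch of making a FAT32 filesystem with mkdosfs; to keep it harmless it builds a throwaway image file rather than touching a real device, and the real partition name (something like /dev/sdb1) is a placeholder you must verify yourself:

```shell
# Build a small FAT32 image -- the EFI shell can only read FAT16/FAT32.
# On a real stick you would point mkdosfs at the partition instead
# (e.g. /dev/sdb1), which destroys whatever is on it, so be careful.
dd if=/dev/zero of=efi-stick.img bs=1M count=64 status=none

# mkdosfs may be installed as mkfs.fat on newer systems; guard for that.
if command -v mkdosfs >/dev/null 2>&1; then
    mkdosfs -F 32 efi-stick.img
else
    echo "mkdosfs not installed on this machine"
fi
```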
Welcome to GRUB!
Error: no such device: 4d02eb5d-6780-42ca-aec6-4d2f8886a0f5
grub rescue>

Grub rescue has no help that I have been able to find, and the bottom line is that the system won't boot. The long UUID turns out to be the UUID of the software raid root partition. My guess is that grub2 is not set up with raid drivers, but who really knows? I try adding "insmod raid" to the /boot/grub/grub.cfg file, but this yields no benefit. I finally decide to just reinstall and abandon the idea of mirroring my root partition.
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda3[0]
      1850582904 blocks super 1.2 [2/1] [U_]
      bitmap: 12/14 pages [48KB], 65536KB chunk

unused devices:
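The cryptic [2/1] [U_] fields above are worth decoding: 2 devices expected, 1 present; each position is "U" for up or "_" for missing. A quick sketch that applies this reading to a saved copy of the status text (the here-string is just the output from above, so the logic is visible without a real array):

```shell
# In /proc/mdstat, [2/1] means 2 devices expected / 1 active, and the
# [U_] map marks each slot "U" (up) or "_" (missing). Any underscore
# in that map means the mirror is degraded.
mdstat='md0 : active raid1 sda3[0]
      1850582904 blocks super 1.2 [2/1] [U_]'

status_map=$(echo "$mdstat" | grep -o '\[U_*\]')
case "$status_map" in
    *_*) echo "array is degraded" ;;
    *)   echo "array is healthy"  ;;
esac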
Then I power the system down, install the second drive, and boot back up. It comes up with just the first drive active in the raid array, but tells me:
mdadm --monitor /dev/md0
mdadm: Monitor using email address "root" from config file
mdadm: Warning: One autorebuild process already running.

This makes it appear that it has discovered that the second disk has reappeared and is busy rebuilding the array, but this is not so.
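To actually find out whether a rebuild is running, polling /proc/mdstat is more trustworthy than that --monitor message: a resync in progress shows a "recovery =" line with a percentage. A sketch of the check, run here against a saved sample of the output above rather than the live file:

```shell
# A rebuild in progress puts a "recovery =" progress line into
# /proc/mdstat; its absence means no resync is running. We grep a
# saved sample here so the logic is visible without a real array.
sample='md0 : active raid1 sda3[0]
      1850582904 blocks super 1.2 [2/1] [U_]'

if echo "$sample" | grep -q 'recovery ='; then
    result="rebuild in progress"
else
    result="no rebuild running"
fi
echo "$result"
```

On the live system the same grep against /proc/mdstat confirms what mdadm --detail says below: the array is degraded but idle.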
More useful is
mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Apr 11 10:29:35 2012
     Raid Level : raid1
     Array Size : 1850582904 (1764.85 GiB 1895.00 GB)
  Used Dev Size : 1850582904 (1764.85 GiB 1895.00 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Apr 17 19:57:26 2012
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : zion.door.com:0
           UUID : aa5280a9:a5a8effb:d500b917:1119103c
         Events : 14758

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
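The Array Size line reports the same quantity three ways: a count of 1 KiB blocks, then GiB (binary) and GB (decimal). The arithmetic is easy to confirm:

```shell
# mdadm reports 1850582904 blocks of 1 KiB each; check the two
# human-readable figures it prints alongside:
#   GiB = blocks / 1024^2      (binary gigabytes)
#   GB  = blocks * 1024 / 10^9 (decimal gigabytes)
awk 'BEGIN {
    blocks = 1850582904
    printf "%.2f GiB  %.2f GB\n", blocks / 1048576, blocks * 1024 / 1e9
}'
# prints: 1764.85 GiB  1895.00 GB
```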
Also of interest is:
cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=aa5280a9:a5a8effb:d500b917:1119103c

blkid
/dev/sda2: UUID="c431f446-76cb-47ac-ac8e-f9804f79eff8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: UUID="aa5280a9-a5a8-effb-d500-b9171119103c" UUID_SUB="e87f2840-a7d5-a5fd-169c-6ed28f4425bf" LABEL="zion.door.com:0" TYPE="linux_raid_member"
/dev/sda5: UUID="193396c8-fec5-4f93-810c-994cce2f45fb" TYPE="ext4"
/dev/sda4: UUID="48396bbf-1668-471e-a525-13dc8c237457" TYPE="swap"
/dev/md0: UUID="7f21fdb7-3351-4fd2-b71a-99b5b452a995" TYPE="ext4"
/dev/sdb2: UUID="aa5280a9-a5a8-effb-d500-b9171119103c" UUID_SUB="e3d5d1c4-517f-c46a-2ee9-f7af6256ff02" LABEL="zion.door.com:0" TYPE="linux_raid_member"
/dev/sdb3: UUID="83fdd722-eb9f-4a5d-81dd-b8f210ab8d7a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb4: UUID="282e31aa-c2a4-4ef5-b0f7-f42ca12aa538" TYPE="swap"
/dev/sdb5: UUID="32243040-528b-4fe3-ab82-83ddaf0e6d8b" TYPE="ext4"
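Note that the UUID in the mdadm.conf ARRAY line (colon separated) and the UUID blkid prints for the two linux_raid_member partitions (dash separated) are the same 32 hex digits, just regrouped. A quick transform shows they match:

```shell
# mdadm writes the member UUID as four colon-separated words; blkid
# shows the same 32 hex digits regrouped 8-4-4-4-12 with dashes.
mdadm_uuid="aa5280a9:a5a8effb:d500b917:1119103c"

hex=$(printf '%s' "$mdadm_uuid" | tr -d ':')
blkid_uuid=$(printf '%s' "$hex" |
    sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')

echo "$blkid_uuid"
# prints: aa5280a9-a5a8-effb-d500-b9171119103c
```

That matches the UUID blkid reports for /dev/sda3 and /dev/sdb2, confirming both partitions belong to this array.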
To add the disk that was removed, I need to issue the command:
mdadm --manage /dev/md0 --add /dev/sdb2
mdadm: re-added /dev/sdb2
[root@bethel tom]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda3[0]
      1850582904 blocks super 1.2 [2/1] [U_]
      [====>................]  recovery = 24.6% (457009472/1850582904) finish=203.5min speed=114105K/sec
      bitmap: 11/14 pages [44KB], 65536KB chunk

So, even though there is essentially no data in this partition, it is going to take over 3 hours to absorb it back into the raid array. I don't want to do this yet, so I do this:
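The finish=203.5min estimate is nothing mysterious: it is simply the blocks remaining divided by the current resync speed, using the numbers from the /proc/mdstat output above:

```shell
# mdstat's finish= estimate is (total - done) / speed, in seconds:
total=1850582904      # array size in 1 KiB blocks
done_=457009472       # blocks already resynced (the 24.6% figure)
speed=114105          # reported resync speed in K/sec

secs=$(( (total - done_) / speed ))
echo "about $(( secs / 60 )) minutes to go"
# prints: about 203 minutes to go
```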
[root@bethel tom]# mdadm --manage /dev/md0 --remove /dev/sdb2
mdadm: hot remove failed for /dev/sdb2: Device or resource busy
[root@bethel tom]# mdadm --manage /dev/md0 --fail /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md0
[root@bethel tom]# mdadm --manage /dev/md0 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2 from /dev/md0

Note that it is not proper to just "remove" a drive; you must fail it first, at least for a drive that is still active in the array.
Adventures in Computing / tom@mmto.org