[lugm.org] scan disk on linux system.

selven pcthegreat at gmail.com
Thu Jun 28 09:01:41 UTC 2012


Your datacenter should normally be able to help you with that...

On Thu, Jun 28, 2012 at 5:00 AM, selven <pcthegreat at gmail.com> wrote:

> Hmm, maybe this is too late, but couldn't you have booted off a bootable
> disk, mounted your partitions, chrooted into them and then tried to rebuild
> your initrd using mkinitrd?
>
> +selven
>
>
> On Thu, Jun 28, 2012 at 4:56 AM, Sebastien <david20 at intnet.mu> wrote:
>
>> Hello,
>>
>> Thanks for your input. It was a bad initrd, but there was no way to boot
>> the OS. I had to reload it in the end.
>>
>> Better to use a KVM-based VPS next time. At least I will be able to boot
>> Knoppix or a custom recovery image when the OS crashes.
>>
>> Thanks
>>
>> Seb
>>
>> -----Original Message-----
>> From: discuss-bounces at discuss.lugm.org
>> [mailto:discuss-bounces at discuss.lugm.org] On Behalf Of Keshwarsingh Nadan
>> Sent: Wednesday, June 27, 2012 8:40 PM
>> To: 'LUGM Discuss Mailing List'
>> Subject: Re: [lugm.org] scan disk on linux system.
>>
>> Hello,
>>
>> Assuming the virtual hard drive itself isn't the problem (and it may very
>> well be), the issue you're seeing is a bad initrd: the kernel is almost
>> certainly going to need a new one.
>>
>> You may be able to get into a repair shell using the 'rdshell' kernel
>> argument, but only if this is a newer version of CentOS (6 is when that
>> came in, I think?). If you can get into rdshell, then you can manually
>> mount the LVM and get a working root filesystem.
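>>
>> If it helps, an edited GRUB kernel line might look roughly like the one
>> below; the kernel version and root device are only placeholders, not your
>> actual values:
>>
>> kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rdshell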
>>
>> The easiest way is if the 'rdshell' kernel option gives you a dracut repair
>> shell when it fails to mount the LVM. Try another boot and, when GRUB comes
>> up, edit your kernel options to add it at the end, then proceed with
>> booting. When the mounting fails, you should drop into a repair shell
>> rather than a kernel panic. If you get into dracut's repair shell, then
>> your Google fodder is 'manually mount LVM'. If you get the file system
>> mounted successfully you're golden, as at that point you can build a new
>> initrd to replace the failing one, reboot, and you're done (Google fodder:
>> mkinitrd).
>>
>> lvm vgscan -v                   # scan for volume groups
>> lvm vgchange -a y               # activate every volume group found
>> lvm lvs --all                   # list the logical volumes
>> mount /dev/volumegroup/logicalvolume /mountpoint
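>>
>> If the mount above succeeds, a rough sketch of the rebuild step follows --
>> this assumes the repair shell has chroot and bind mounts available (if it
>> doesn't, fall back to the LiveCD route below), and /mountpoint is wherever
>> you mounted the root LV:
>>
>> mount --bind /dev  /mountpoint/dev
>> mount --bind /proc /mountpoint/proc
>> mount --bind /sys  /mountpoint/sys
>> chroot /mountpoint
>> mount /boot                      # only if /boot is a separate partition
>> dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
>> exit                             # leave the chroot, then reboot the VM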
>>
>> The second option would be to boot the VM from a LiveCD or DVD image,
>> preferably one that is very close (in CentOS version and kernel version) to
>> your VM image's OS, then mount the filesystem, chroot into it and rebuild
>> your initrd. Google fodder for that process is 'bad initrd'.
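>>
>> Roughly, from the LiveCD that might look like the sketch below. The volume
>> group, logical volume and kernel version are placeholders -- look in
>> /mnt/lib/modules for the version actually installed in the VM, because
>> uname -r inside the chroot reports the LiveCD's kernel, not the VM's:
>>
>> vgchange -a y                               # activate the VM's LVM volumes
>> mount /dev/VolGroup00/LogVol00 /mnt         # the VM's root filesystem
>> mount /dev/xvda1 /mnt/boot                  # only if /boot is separate
>> for d in dev proc sys; do mount --bind /$d /mnt/$d; done
>> chroot /mnt
>> KVER=2.6.18-308.el5xen                      # placeholder: the VM's own kernel
>> mkinitrd -f /boot/initrd-$KVER.img $KVER    # CentOS 5; on CentOS 6 use dracut
>> exit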
>>
>> The last option (and it's not guaranteed to work on the first try) is for
>> when you have another virtual image of an operable VM. Go back to that
>> operable VM and make a new initrd there. Then boot the non-operable VM from
>> a LiveCD, mount the LVM drive, and copy the new initrd into the VM's /boot
>> directory.
>>
>> The new initrd must incorporate the kernel modules needed to function on
>> the non-operable VM, so it isn't as simple as just running mkinitrd on the
>> operable VM - you first need to make certain all the modules needed for the
>> virtual hardware are in your non-operable VM's config, or you'll run into
>> exactly the same problem you're having now.
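>>
>> A rough sketch of that, with the kernel version and module names only as
>> placeholders (for Xen block/network devices the drivers are usually xenblk
>> and xennet on CentOS 5 kernels, or xen_blkfront and xen_netfront on newer
>> ones -- check which ones the broken VM actually needs):
>>
>> # on the operable VM (this needs /lib/modules/$KVER for the broken VM's kernel)
>> KVER=2.6.18-308.el5xen
>> mkinitrd --with=xenblk --with=xennet -f /tmp/initrd-$KVER.img $KVER
>>
>> # then transfer the image (e.g. scp) to the LiveCD environment booted on the
>> # broken VM, and with its root LV mounted on /mnt:
>> cp initrd-$KVER.img /mnt/boot/initrd-$KVER.img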
>>
>> Regards,
>> kn
>>
>> -----Original Message-----
>> From: discuss-bounces at discuss.lugm.org
>> [mailto:discuss-bounces at discuss.lugm.org] On Behalf Of Ajay R Ramjatan
>> Sent: Wednesday, June 27, 2012 6:14 PM
>> To: LUGM Discuss Mailing List
>> Subject: Re: [lugm.org] scan disk on linux system.
>>
>> Hi there,
>>
>> Sorry to hear about your distress. If you have an IP-KVM, please use it to
>> boot from a boot CD (ask your datacenter for help) and rescue your system.
>> If not, ask the datacenter to do it for you and give them precise
>> instructions on what you want done. E.g., do you want a rescue or a
>> completely new install? Do you want to preserve data on certain devices,
>> such as /home?
>>
>> Now is when you will be testing how good your backup mechanism is, assuming
>> something ReallyBad(TM) happened. I hope you can solve your problem with
>> just a simple reboot and fsck, with the help of your datacenter tech guys.
>>
>> Let us know how it works out!
>>
>> On Wed, Jun 27, 2012 at 5:12 PM, Sebastien <david20 at intnet.mu> wrote:
>> >
>> > Of course the lazy option is to reload... I am just curious about this
>> > one.
>> >
>> > -----Original Message-----
>> > From: discuss-bounces at discuss.lugm.org
>> > [mailto:discuss-bounces at discuss.lugm.org] On Behalf Of Chris Wilson
>> > Sent: Wednesday, June 27, 2012 4:01 PM
>> > To: LUGM Discuss Mailing List
>> > Subject: Re: [lugm.org] scan disk on linux system.
>> >
>> > Hi Sebastien,
>> >
>> > On Wed, 27 Jun 2012, Sebastien wrote:
>> >
>> >> I was working on a remote system, installing cPanel and things like
>> >> that. I ran scandisk while the file system was mounted.
>> >>
>> >> Now the system is not able to boot. I am getting a kernel panic at boot.
>> >>
>> >> I am a bit too lazy to reconfigure the server again. I am using Xen and
>> >> CentOS. Any ideas what can be done?
>> >
>> > You're a bit lazy in providing details about the problem too. Maybe if
>> > you imagine the problem really hard, we might be able to psychically
>> > determine what it is and fix it for you using telekinesis.
>> >
>> > If you're as lazy as you say, then reinstalling will be easier than
>> > recovering the server, which sounds like it might be seriously hard
>> > work, depending on how many "errors" you "fixed" in fsck. By the way,
>> > if you really ran scandisk and not fsck against the filesystem, you
>> > might as well give up now.
>> >
>> > If you're as lazy as you say, you might find that having good backups
>> > helps you to get away with it.
>> >
>> > Cheers, Chris.
>> > --
>> > Aptivate | http://www.aptivate.org | Phone: +44 1223 967 838 Future
>> > Business, Cam City FC, Milton Rd, Cambridge, CB4 1UY, UK
>> >
>> > Aptivate is a not-for-profit company registered in England and Wales
>> > with company number 04980791.
>> >
>> >
>>
>


-- 
*Pirabarlen Cheenaramen *| $3|v3n* *
L'escalier

mobile: +230 49 24 918
email: pcthegreat at gmail.com || god at hackers.mu
contact: http://godifiy.me
/*memory is like prison*/
(user==selven)?free(user):user=malloc(sizeof(brain));
Save electricity & disk space. Cat this mail to >/dev/null 2>&1 after use.