Problems with XFS, LVM, or my HDDs?

08/11/2008 - 10:55 by Chris Cohen
Hello,

for about two days now I have been having recurring problems with my data LVM and XFS. Just now I got, for the third time, an error like this:
[78443.896109] Filesystem "dm-0": Access to block zero in inode 1639912020 start_block: 0 start_off: 0 blkcnt: 0 extent-state: 0 lastx: 2f9
[78443.896910] Filesystem "dm-0": Access to block zero in inode 1639912020 start_block: 0 start_off: 0 blkcnt: 0 extent-state: 0 lastx: 2f9
[78443.897534] Filesystem "dm-0": Access to block zero in inode 1639912020 start_block: 0 start_off: 0 blkcnt: 0 extent-state: 0 lastx: 2f9
[78443.898148] Unable to handle kernel paging request at ffff8800660cb00f RIP:
[78443.898157] [<ffffffff8033ca18>] memmove+0x28/0x40
[78443.898190] PGD 2648067 PUD 284a067 PMD 297b067 PTE 0
[78443.898214] Oops: 0002 [1] SMP
[78443.898231] CPU 1
[78443.898244] Modules linked in: tun ipv6 xfs xt_tcpudp xt_physdev iptable_filter ip_tables x_tables ppdev parport_pc lp parport bridge ac it87 hwmon_vid loop snd_hda_intel snd_pcm_oss snd_mixer_oss snd_pcm snd_page_alloc snd_hwdep snd_seq_dummy snd_seq_oss usbhid hid snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device snd soundcore i2c_piix4 button pcspkr k8temp i2c_core evdev shpchp pci_hotplug ext3 jbd mbcache ide_cd cdrom pata_acpi ata_generic pata_atiixp sg sd_mod ohci1394 ieee1394 e1000 r8169 atiixp ehci_hcd ide_core ohci_hcd ahci usbcore libata ssb scsi_mod raid10 raid456 async_xor async_memcpy async_tx xor raid0 multipath linear dm_mirror dm_snapshot dm_mod thermal processor fan fuse raid1 md_mod
[78443.898607] Pid: 7355, comm: smbd Not tainted 2.6.24-21-xen #1
[78443.898619] RIP: e030:[<ffffffff8033ca18>] [<ffffffff8033ca18>] memmove+0x28/0x40
[78443.898640] RSP: e02b:ffff88006f2bd400 EFLAGS: 00010296
[78443.898651] RAX: 00000000000000ff RBX: 0000000000001010 RCX: ffff8800660cb010
[78443.898663] RDX: ffff8800660ca000 RSI: ffff88001ae5900f RDI: ffff8800660ca000
[78443.898675] RBP: 0000000000000000 R08: 0000000000000000 R09: ffff8800660ca000
[78443.898687] R10: ffff8800f0c027d0 R11: 0000000000000000 R12: 0000000000000001
[78443.898699] R13: ffff8800e936ced0 R14: ffff88001de91360 R15: 0000000000000001
[78443.898712] FS: 00007fcdc032d700(0000) GS:ffffffff805c7080(0000) knlGS:0000000000000000
[78443.898729] CS: e033 DS: 0000 ES: 0000
[78443.898740] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[78443.898752] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[78443.898764] Process smbd (pid: 7355, threadinfo ffff88006f2bc000, task ffff8800d08d6040)
[78443.898780] Stack: ffffffff88417cfc ffffffff884633b9 0000010100000005 ffff88001ae58000
[78443.898813] ffff8800312cd3c0 000000000000033f 00000000ffffffff 0000000000000340
[78443.898842] ffff88001de91360 0000000000000001 ffffffff884185f7 0000340000000010
[78443.898863] Call Trace:
[78443.898907] [<ffffffff88417cfc>] :xfs:xfs_iext_add_indirect_multi+0xdc/0x220
[78443.898944] [<ffffffff884185f7>] :xfs:xfs_iext_add+0x1c7/0x250
[78443.898960] [<ffffffff80229f4d>] __dequeue_entity+0x3d/0x50
[78443.898991] [<ffffffff884189dc>] :xfs:xfs_iext_insert+0x1c/0x50
[78443.899020] [<ffffffff883f640b>] :xfs:xfs_bmap_add_extent_hole_delay+0x42b/0x450
[78443.899064] [<ffffffff883f819e>] :xfs:xfs_bmap_add_extent+0x29e/0x440
[78443.899091] [<ffffffff88417145>] :xfs:xfs_iext_get_ext+0x55/0x70
[78443.899118] [<ffffffff883f406f>] :xfs:xfs_bmap_search_multi_extents+0xaf/0x120
[78443.899143] [<ffffffff8022f6fc>] __cond_resched+0x1c/0x50
[78443.899172] [<ffffffff8842aa23>] :xfs:xfs_icsb_modify_counters+0x73/0x1a0
[78443.899203] [<ffffffff883fb6e7>] :xfs:xfs_bmapi+0x657/0x12d0
[78443.899242] [<ffffffff88417100>] :xfs:xfs_iext_get_ext+0x10/0x70
[78443.899275] [<ffffffff883f41aa>] :xfs:xfs_bmap_search_extents+0xca/0x100
[78443.899316] [<ffffffff8841dcb7>] :xfs:xfs_iomap_write_delay+0x1e7/0x2e0
[78443.899370] [<ffffffff8841d51c>] :xfs:xfs_iomap+0x37c/0x390
[78443.899411] [<ffffffff8843b2a9>] :xfs:__xfs_get_blocks+0x79/0x200
[78443.899427] [<ffffffff802c2dd8>] alloc_buffer_head+0x58/0x70
[78443.899440] [<ffffffff802c3830>] alloc_page_buffers+0x60/0xe0
[78443.899459] [<ffffffff802c4d32>] __block_prepare_write+0x252/0x450
[78443.899489] [<ffffffff8843b450>] :xfs:xfs_get_blocks+0x0/0x10
[78443.899510] [<ffffffff80272ffc>] __grab_cache_page+0xcc/0x100
[78443.899528] [<ffffffff802c4fb4>] block_write_begin+0x54/0xe0
[78443.899558] [<ffffffff8843aca2>] :xfs:xfs_vm_write_begin+0x22/0x30
[78443.899586] [<ffffffff8843b450>] :xfs:xfs_get_blocks+0x0/0x10
[78443.899600] [<ffffffff80273f39>] generic_file_buffered_write+0x149/0x6e0
[78443.899646] [<ffffffff884436f6>] :xfs:xfs_write+0x676/0x910
[78443.899680] [<ffffffff8029d849>] do_sync_write+0xd9/0x120
[78443.899701] [<ffffffff8024cc40>] autoremove_wake_function+0x0/0x30
[78443.899717] [<ffffffff80470619>] mutex_lock+0x9/0x20
[78443.899731] [<ffffffff802ccbde>] inotify_inode_queue_event+0x7e/0x150
[78443.899746] [<ffffffff8020fad7>] local_clock+0x57/0xb0
[78443.899766] [<ffffffff8029e18d>] vfs_write+0xed/0x190
[78443.899781] [<ffffffff8029e9f4>] sys_pwrite64+0x84/0xa0
[78443.899797] [<ffffffff8020c698>] system_call+0x68/0x6d
[78443.899812] [<ffffffff8020c630>] system_call+0x0/0x6d
[78443.899828]
[78443.899837]
[78443.899837] Code: 88 41 ff 48 83 e9 01 48 39 d1 75 ec 48 89 f8 c3 e9 13 ff ff
[78443.899926] RIP [<ffffffff8033ca18>] memmove+0x28/0x40
[78443.899941] RSP <ffff88006f2bd400>
[78443.899951] CR2: ffff8800660cb00f
[78443.900454] [ end trace 55c9ef914910560d ]

The last time this happened I ran xfs_repair, and afterwards everything worked as usual again. The partition is quite full; could that be a problem?
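
For anyone wanting to retrace the check and repair, here is a rough sketch of the steps; /dev/mapper/vg0-data and /srv/data are only placeholder names for the data LV and its mount point:

    # How full is the filesystem, in blocks and in inodes?
    df -h /srv/data
    df -i /srv/data

    # xfs_repair needs the filesystem unmounted
    umount /srv/data

    # Dry run first: -n only reports problems, changes nothing
    xfs_repair -n /dev/mapper/vg0-data

    # Then the actual repair
    xfs_repair /dev/mapper/vg0-data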

I am using Ubuntu Intrepid with the 2.6.24-21-xen kernel.

Best regards
Chris


#1 Sven Hartge
08/11/2008 - 15:05
Chris Cohen wrote:

> for about two days now I have been having recurring problems with my data LVM and XFS.
> Just now I got, for the third time, an error like this:



Is the LVM running on top of a Linux software RAID (MD)?

There used to be serious problems when running XFS on LVM on MD, because XFS tended to use a very large amount of stack space, and with such a stack of block-device layers the kernel stack could overflow, resulting in filesystem errors.

That is supposed to have been fixed by now, though.
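
You can check the actual layering quickly; a sketch (device names will of course differ on your machine):

    # Which physical volumes back the volume group?
    # PVs named /dev/md* mean LVM-on-MD.
    pvs

    # Is there any software RAID at all, and in what state?
    cat /proc/mdstat

    # The device-mapper view of the stacking
    dmsetup ls --tree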

Your best bet is to take this directly to the XFS mailing list at SGI. (I don't have the address handy right now.)
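
When you post there, it usually helps to include the kernel and tool versions plus the filesystem geometry; a sketch, again with a placeholder mount point:

    # Kernel version, xfsprogs version, filesystem geometry
    uname -a
    xfs_repair -V
    xfs_info /srv/data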



Sig lost. Core dumped.
