[dm-devel] HPA unlock during partition scan of RAID components

Tejun Heo tj at kernel.org
Thu Nov 3 15:54:16 UTC 2011


Hello,

(cc'ing dm-devel)

On Thu, Nov 03, 2011 at 01:02:37AM +0000, Hawrylewicz Czarnowski, Przemyslaw wrote:
> Recently we have encountered a problem with unlocking of the HPA on disks belonging to a RAID array.
> The scenario in the simplest form is as follows:
> * take HPA-resized drives
> * use a SW RAID solution with metadata located at the end of the disk (e.g. md v1.0, IMSM)
> * create a raid0/10/5 array using whole devices (e.g. /dev/sda) (the RAID size must exceed the capacity of a single device; raid1 is not affected here)
> * create any partition whose size exceeds the HPA limit of a single component
> 
> According to the code of rescan_partitions(), if a disk has a capacity limit enabled (HPA) and a partition boundary extends beyond that limit, the limit is bypassed/unlocked (regardless of the libata.ignore_hpa setting).
> For a single block device this is the proper action; for a RAID array using the whole block device it is not. The partition table, despite being read from a single disk, belongs to the RAID array. The values are correct in the RAID context, but from the point of view of the single device they are not.
> The problem is that, in general, rescan_partitions() has no knowledge of any RAID array using that block device. Moreover, that RAID is not assembled yet.
> The result: once the HPA limit is unlocked, the metadata on this device is no longer recognizable; in the worst case the RAID is not assembled or fails.
> 
> The simplest resolution is to conditionally forbid HPA unlocking (e.g. by extending the ignore_hpa parameter), but I suppose a better and more 'intelligent' solution can be found.
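
For reference, the check being described lives in the rescan_partitions()
path.  Paraphrased and simplified (not a verbatim quote of the kernel
source), the logic is roughly:

	/*
	 * If a partition's end lies beyond the current (possibly
	 * HPA-clipped) capacity, try to unlock the native capacity and
	 * restart the scan; only if that fails, clamp the partition.
	 */
	if (from + size > get_capacity(disk)) {
		printk(KERN_WARNING
		       "%s: p%d size %llu extends beyond EOD, ",
		       disk->disk_name, p, (unsigned long long)size);

		if (disk_unlock_native_capacity(disk)) {
			/* HPA dropped, capacity grew - rescan partitions */
			goto rescan;
		} else {
			/* could not unlock; truncate to the disk end */
			size = get_capacity(disk) - from;
		}
	}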

This has come up a couple of times, and I think the proper solution is
to always unlock HPA and expose both sizes - locked and unlocked - and
let dm, md or whatever do whatever is appropriate.  The block or driver
layer doesn't have any way to determine which one is the right bet - it
simply doesn't have enough information.  I tried to bounce this idea
off people who would actually be using this (dm/md) but haven't heard
back yet.
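
To make the "expose both sizes" part concrete, here is a purely
hypothetical sketch (choose_capacity(), sb_valid_near_end() and the
typedef are made-up illustration, not an existing interface): if both
the HPA-clipped and the native capacity were visible, an md-style
consumer could simply keep whichever size has valid end-of-device
metadata.

	#include <stdbool.h>

	typedef unsigned long long sector_t;

	/* hypothetical helper: read the metadata candidate that would
	 * sit at the end of a device of 'capacity' sectors and verify
	 * its magic/checksum */
	static bool sb_valid_near_end(int fd, sector_t capacity);

	static sector_t choose_capacity(int fd, sector_t clipped,
					sector_t native)
	{
		if (sb_valid_near_end(fd, clipped))
			return clipped;	/* array created with HPA active */
		if (sb_valid_near_end(fd, native))
			return native;	/* array created on the full disk */
		return native;		/* no RAID metadata at either end */
	}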

Thanks.

-- 
tejun



