Commit 5fc51746 authored by Venkatesh Pallipadi, committed by H. Peter Anvin

x86, pat: Keep identity maps consistent with mmaps even when pat_disabled

Make reserve_memtype internally take care of the pat-disabled case and fall
back to default return values.

Remove the specific pat_disabled checks in the track_* routines.

Change kernel_map_sync_memtype to sync identity map even when
pat_disabled.

This change ensures that the identity map is kept in sync even in the
pat-disabled case. Before this patch, with pat disabled, ioremap() kept the
identity maps in sync while other APIs such as pci and /dev/mem mmap did
not, which was inconsistent behavior.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Parent 5400743d
@@ -339,6 +339,8 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	if (new_type) {
 		if (req_type == -1)
 			*new_type = _PAGE_CACHE_WB;
+		else if (req_type == _PAGE_CACHE_WC)
+			*new_type = _PAGE_CACHE_UC_MINUS;
 		else
 			*new_type = req_type & _PAGE_CACHE_MASK;
 	}
@@ -577,7 +579,7 @@ int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
 {
 	unsigned long id_sz;
 
-	if (!pat_enabled || base >= __pa(high_memory))
+	if (base >= __pa(high_memory))
 		return 0;
 
 	id_sz = (__pa(high_memory) < base + size) ?
@@ -677,9 +679,6 @@ int track_pfn_vma_copy(struct vm_area_struct *vma)
 	unsigned long vma_size = vma->vm_end - vma->vm_start;
 	pgprot_t pgprot;
 
-	if (!pat_enabled)
-		return 0;
-
 	/*
 	 * For now, only handle remap_pfn_range() vmas where
 	 * is_linear_pfn_mapping() == TRUE. Handling of
@@ -715,9 +714,6 @@ int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t *prot,
 	resource_size_t paddr;
 	unsigned long vma_size = vma->vm_end - vma->vm_start;
 
-	if (!pat_enabled)
-		return 0;
-
 	/*
 	 * For now, only handle remap_pfn_range() vmas where
 	 * is_linear_pfn_mapping() == TRUE. Handling of
@@ -743,9 +739,6 @@ void untrack_pfn_vma(struct vm_area_struct *vma, unsigned long pfn,
 	resource_size_t paddr;
 	unsigned long vma_size = vma->vm_end - vma->vm_start;
 
-	if (!pat_enabled)
-		return;
-
 	/*
 	 * For now, only handle remap_pfn_range() vmas where
 	 * is_linear_pfn_mapping() == TRUE. Handling of