Commit 73154383 authored by Linus Torvalds

Merge branch 'akpm' (incoming from Andrew)

Merge first batch of fixes from Andrew Morton:

 - A couple of kthread changes

 - A few minor audit patches

 - A number of fbdev patches.  Florian remains AWOL so I'm picking up
   some of these.

 - A few kbuild things

 - ocfs2 updates

 - Almost all of the MM queue

(And in the meantime, I already have the second big batch from Andrew
pending in my mailbox ;^)

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (149 commits)
  memcg: take reference before releasing rcu_read_lock
  mem hotunplug: fix kfree() of bootmem memory
  mm/Kconfig: add an option to disable bounce
  mm, nobootmem: do memset() after memblock_reserve()
  mm, nobootmem: clean-up of free_low_memory_core_early()
  fs/buffer.c: remove unnecessary init operation after allocating buffer_head.
  numa, cpu hotplug: change links of CPU and node when changing node number by onlining CPU
  mm: fix memory_hotplug.c printk format warning
  mm: swap: mark swap pages writeback before queueing for direct IO
  swap: redirty page if page write fails on swap file
  mm, memcg: give exiting processes access to memory reserves
  thp: fix huge zero page logic for page with pfn == 0
  memcg: avoid accessing memcg after releasing reference
  fs: fix fsync() error reporting
  memblock: fix missing comment of memblock_insert_region()
  mm: Remove unused parameter of pages_correctly_reserved()
  firmware, memmap: fix firmware_map_entry leak
  mm/vmstat: add note on safety of drain_zonestat
  mm: thp: add split tail pages to shrink page list in page reclaim
  mm: allow for outstanding swap writeback accounting
  ...
@@ -40,6 +40,7 @@ Features:
  - soft limit
  - moving (recharging) account at moving a task is selectable.
  - usage threshold notifier
+ - memory pressure notifier
  - oom-killer disable knob and oom-notifier
  - Root cgroup has no limit controls.
@@ -65,6 +66,7 @@ Brief summary of control files.
  memory.stat			 # show various statistics
  memory.use_hierarchy		 # set/show hierarchical account enabled
  memory.force_empty		 # trigger forced move charge to parent
+ memory.pressure_level		 # set memory pressure notifications
  memory.swappiness		 # set/show swappiness parameter of vmscan
				 (See sysctl's vm.swappiness)
  memory.move_charge_at_immigrate # set/show controls of moving charges
@@ -762,7 +764,73 @@ At reading, current status of OOM is shown.
 	under_oom	 0 or 1 (if 1, the memory cgroup is under OOM, tasks may
 			 be stopped.)
 
-11. TODO
+11. Memory Pressure
+
+The pressure level notifications can be used to monitor the memory
+allocation cost; based on the pressure, applications can implement
+different strategies for managing their memory resources. The pressure
+levels are defined as follows:
+
+The "low" level means that the system is reclaiming memory for new
+allocations. Monitoring this reclaiming activity might be useful for
+maintaining the cache level. Upon notification, the program (typically
+an "Activity Manager") might analyze vmstat and act in advance (e.g.
+shut down unimportant services prematurely).
+
+The "medium" level means that the system is experiencing medium memory
+pressure; it might be swapping, paging out active file caches, etc.
+Upon this event, applications may decide to further analyze
+vmstat/zoneinfo/memcg or internal memory usage statistics and free any
+resources that can be easily reconstructed or re-read from disk.
+
+The "critical" level means that the system is actively thrashing; it is
+about to run out of memory (OOM), or the in-kernel OOM killer is about
+to trigger. Applications should do whatever they can to help the
+system. It might be too late to consult vmstat or any other
+statistics, so it is advisable to take immediate action.
+
+The events are propagated upward until the event is handled, i.e. the
+events are not pass-through. Here is what this means: suppose you have
+three cgroups, A->B->C, with event listeners set up on A, B and C, and
+group C experiences some pressure. In this situation, only group C will
+receive the notification; groups A and B will not. This is done to
+avoid excessive "broadcasting" of messages, which disturbs the system
+and which is especially bad if we are low on memory or thrashing. So,
+organize the cgroups wisely, or propagate the events manually (or ask
+us to implement pass-through events, explaining why you would need
+them).
+
+The file memory.pressure_level is only used to set up an eventfd. To
+register a notification, an application must:
+
+- create an eventfd using eventfd(2);
+- open memory.pressure_level;
+- write a string like "<event_fd> <fd of memory.pressure_level> <level>"
+  to cgroup.event_control.
+
+The application will be notified through the eventfd when memory
+pressure is at the specified level (or higher). Read/write operations
+on memory.pressure_level are not implemented.
+
+Test:
+
+   Here is a small script example that makes a new cgroup, sets up a
+   memory limit, sets up a notification in the cgroup, and then makes
+   the child cgroup experience a critical pressure:
+
+   # cd /sys/fs/cgroup/memory/
+   # mkdir foo
+   # cd foo
+   # cgroup_event_listener memory.pressure_level low &
+   # echo 8000000 > memory.limit_in_bytes
+   # echo 8000000 > memory.memsw.limit_in_bytes
+   # echo $$ > tasks
+   # dd if=/dev/zero | read x
+
+   (Expect a bunch of notifications, and eventually, the oom-killer will
+   trigger.)
+
+12. TODO
 
 1. Add support for accounting huge pages (as a separate controller)
 2. Make per-cgroup scanner reclaim not-shared pages first
......
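Editor's note: the registration sequence documented in the hunk above can also be exercised from C rather than through cgroup_event_listener. Below is a minimal sketch; the cgroup path /sys/fs/cgroup/memory/foo and the "low" level are assumptions for illustration, and error handling is reduced to the essentials.

/* Minimal sketch: listen for "low" memory pressure events on one memcg
 * (cgroup v1). The cgroup "foo" must already exist. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/eventfd.h>

int main(void)
{
	int efd = eventfd(0, 0);                    /* step 1: eventfd(2) */
	int pfd = open("/sys/fs/cgroup/memory/foo/memory.pressure_level",
		       O_RDONLY);                   /* step 2 */
	int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
		       O_WRONLY);
	char buf[64];
	uint64_t count;

	if (efd < 0 || pfd < 0 || cfd < 0) {
		perror("setup");
		return 1;
	}
	/* step 3: "<event_fd> <fd of memory.pressure_level> <level>" */
	snprintf(buf, sizeof(buf), "%d %d low", efd, pfd);
	if (write(cfd, buf, strlen(buf)) < 0) {
		perror("cgroup.event_control");
		return 1;
	}
	/* each read blocks until the kernel signals pressure at the
	 * requested level or higher */
	while (read(efd, &count, sizeof(count)) == sizeof(count))
		printf("low memory pressure event\n");
	return 0;
}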
@@ -18,6 +18,7 @@ files can be found in mm/swap.c.
 Currently, these files are in /proc/sys/vm:
 
+- admin_reserve_kbytes
 - block_dump
 - compact_memory
 - dirty_background_bytes
@@ -53,11 +54,41 @@ Currently, these files are in /proc/sys/vm:
 - percpu_pagelist_fraction
 - stat_interval
 - swappiness
+- user_reserve_kbytes
 - vfs_cache_pressure
 - zone_reclaim_mode
 
 ==============================================================
 
+admin_reserve_kbytes
+
+The amount of free memory in the system that should be reserved for users
+with the capability cap_sys_admin.
+
+admin_reserve_kbytes defaults to min(3% of free pages, 8MB).
+
+That should provide enough for the admin to log in and kill a process,
+if necessary, under the default overcommit 'guess' mode.
+
+Systems running under overcommit 'never' should increase this to account
+for the full Virtual Memory Size of the programs used to recover.
+Otherwise, root may not be able to log in to recover the system.
+
+How do you calculate a minimum useful reserve?
+
+sshd or login + bash (or some other shell) + top (or ps, kill, etc.)
+
+For overcommit 'guess', we can sum resident set sizes (RSS).
+On x86_64 this is about 8MB.
+
+For overcommit 'never', we can take the max of their virtual sizes (VSZ)
+and add the sum of their RSS.
+On x86_64 this is about 128MB.
+
+Changing this takes effect whenever an application requests memory.
+
+==============================================================
+
 block_dump
 
 block_dump enables block I/O debugging when set to a nonzero value. More
@@ -542,6 +573,7 @@ memory until it actually runs out.
 When this flag is 2, the kernel uses a "never overcommit"
 policy that attempts to prevent any overcommit of memory.
+Note that user_reserve_kbytes affects this policy.
 
 This feature can be very useful because there are a lot of
 programs that malloc() huge amounts of memory "just-in-case"
@@ -645,6 +677,24 @@ The default value is 60.
 ==============================================================
 
+user_reserve_kbytes
+
+When overcommit_memory is set to 2, "never overcommit" mode, reserve
+min(3% of current process size, user_reserve_kbytes) of free memory.
+This is intended to prevent a user from starting a single memory hogging
+process, such that they cannot recover (kill the hog).
+
+user_reserve_kbytes defaults to min(3% of the current process size, 128MB).
+
+If this is reduced to zero, then the user will be allowed to allocate
+all free memory with a single process, minus admin_reserve_kbytes.
+Any subsequent attempts to execute a command will result in
+"fork: Cannot allocate memory".
+
+Changing this takes effect whenever an application requests memory.
+
+==============================================================
+
 vfs_cache_pressure
 ------------------
......
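Editor's note: the admin_reserve_kbytes default described above, min(3% of free pages, 8MB), is easy to reproduce from userspace. A rough sketch follows; reading MemFree from /proc/meminfo as a stand-in for "free pages" is an assumption for illustration, not part of the patch.

/* Sketch: compute min(3% of free memory, 8MB), in KiB, mirroring the
 * documented admin_reserve_kbytes default. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	unsigned long free_kb = 0, reserve_kb;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "MemFree: %lu kB", &free_kb) == 1)
			break;
	fclose(f);

	reserve_kb = free_kb * 3 / 100;   /* 3% of free memory */
	if (reserve_kb > 8192)            /* capped at 8MB */
		reserve_kb = 8192;
	printf("suggested admin_reserve_kbytes: %lu\n", reserve_kb);
	return 0;
}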
@@ -8,7 +8,9 @@ The Linux kernel supports the following overcommit handling modes
 		default.
 
 1	-	Always overcommit. Appropriate for some scientific
-		applications.
+		applications. Classic example is code using sparse arrays
+		and just relying on the virtual memory consisting almost
+		entirely of zero pages.
 
 2	-	Don't overcommit. The total address space commit
 		for the system is not permitted to exceed swap + a
@@ -18,6 +20,10 @@ The Linux kernel supports the following overcommit handling modes
 		pages but will receive errors on memory allocation as
 		appropriate.
+		Useful for applications that want to guarantee their
+		memory allocations will be available in the future
+		without having to initialize every page.
 
 The overcommit policy is set via the sysctl `vm.overcommit_memory'.
 The overcommit percentage is set via `vm.overcommit_ratio'.
......
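Editor's note: one way to observe the policy difference described above is to ask for ever-larger address-space commitments without touching the pages. A small, hypothetical probe (the starting size and doubling strategy are arbitrary; where the loop stops depends on the mode, RAM, swap, and the reserves documented earlier):

/* Sketch: probe how large a commit the current overcommit policy grants.
 * No page is ever dirtied, so only the commitment itself is tested. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t sz = (size_t)1 << 30;          /* start at 1 GB */
	void *p;

	while ((p = malloc(sz)) != NULL) {    /* commit only, never touched */
		free(p);
		sz <<= 1;                     /* double until refused */
	}
	printf("commit refused at %zu GB\n", sz >> 30);
	return 0;
}

Under mode 1 the refusal comes only when the address space itself runs out; under mode 2 it comes much earlier, at the swap + ratio limit.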
@@ -185,7 +185,6 @@ nautilus_machine_check(unsigned long vector, unsigned long la_ptr)
 	mb();
 }
 
-extern void free_reserved_mem(void *, void *);
 extern void pcibios_claim_one_bus(struct pci_bus *);
 
 static struct resource irongate_io = {
@@ -239,8 +238,8 @@ nautilus_init_pci(void)
 	if (pci_mem < memtop)
 		memtop = pci_mem;
 	if (memtop > alpha_mv.min_mem_address) {
-		free_reserved_mem(__va(alpha_mv.min_mem_address),
-				  __va(memtop));
+		free_reserved_area((unsigned long)__va(alpha_mv.min_mem_address),
+				   (unsigned long)__va(memtop), 0, NULL);
 		printk("nautilus_init_pci: %ldk freed\n",
 			(memtop - alpha_mv.min_mem_address) >> 10);
 	}
......
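Editor's note: the free_reserved_area() call introduced above, and the free_initmem_default()/free_reserved_page() calls in the hunks that follow, all centralize the same open-coded loop that each architecture used to carry. As a rough sketch of that pattern, reconstructed from the removed loops visible in this diff (kernel context assumed; the real helper lives in mm/page_alloc.c and differs in detail):

/*
 * Sketch only: what the removed per-arch loops did and what
 * free_reserved_area() now provides centrally.
 */
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/printk.h>

static unsigned long sketch_free_reserved_area(unsigned long start,
					       unsigned long end,
					       int poison, const char *s)
{
	unsigned long pages = 0;

	for (; start < end; start += PAGE_SIZE) {
		struct page *page = virt_to_page(start);

		ClearPageReserved(page);   /* page is no longer reserved   */
		init_page_count(page);     /* reset the refcount to 1      */
		if (poison)
			memset((void *)start, poison, PAGE_SIZE);
		free_page(start);          /* hand it to the buddy allocator */
		totalram_pages++;
		pages++;
	}
	if (s)
		pr_info("Freeing %s memory: %ldK\n",
			s, pages << (PAGE_SHIFT - 10));
	return pages;
}

The per-arch conversions below are then one-liners, with the poison value and the printed label passed as arguments.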
@@ -31,6 +31,7 @@
 #include <asm/console.h>
 #include <asm/tlb.h>
 #include <asm/setup.h>
+#include <asm/sections.h>
 
 extern void die_if_kernel(char *,struct pt_regs *,long);
@@ -281,8 +282,6 @@ printk_memory_info(void)
 {
 	unsigned long codesize, reservedpages, datasize, initsize, tmp;
 	extern int page_is_ram(unsigned long) __init;
-	extern char _text, _etext, _data, _edata;
-	extern char __init_begin, __init_end;
 
 	/* printk all informations */
 	reservedpages = 0;
@@ -317,33 +316,16 @@ mem_init(void)
 }
 #endif /* CONFIG_DISCONTIGMEM */
 
-void
-free_reserved_mem(void *start, void *end)
-{
-	void *__start = start;
-	for (; __start < end; __start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(__start));
-		init_page_count(virt_to_page(__start));
-		free_page((long)__start);
-		totalram_pages++;
-	}
-}
-
 void
 free_initmem(void)
 {
-	extern char __init_begin, __init_end;
-
-	free_reserved_mem(&__init_begin, &__init_end);
-	printk ("Freeing unused kernel memory: %ldk freed\n",
-		(&__init_end - &__init_begin) >> 10);
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void
 free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_reserved_mem((void *)start, (void *)end);
-	printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -17,6 +17,7 @@
 #include <asm/hwrpb.h>
 #include <asm/pgalloc.h>
+#include <asm/sections.h>
 
 pg_data_t node_data[MAX_NUMNODES];
 EXPORT_SYMBOL(node_data);
@@ -325,8 +326,6 @@ void __init mem_init(void)
 {
 	unsigned long codesize, reservedpages, datasize, initsize, pfn;
 	extern int page_is_ram(unsigned long) __init;
-	extern char _text, _etext, _data, _edata;
-	extern char __init_begin, __init_end;
 	unsigned long nid, i;
 
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
......
@@ -144,37 +144,18 @@ void __init mem_init(void)
 		PAGES_TO_KB(reserved_pages));
 }
 
-static void __init free_init_pages(const char *what, unsigned long begin,
-				   unsigned long end)
-{
-	unsigned long addr;
-
-	pr_info("Freeing %s: %ldk [%lx] to [%lx]\n",
-		what, TO_KB(end - begin), begin, end);
-
-	/* need to check that the page we free is not a partial page */
-	for (addr = begin; addr + PAGE_SIZE <= end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-}
-
 /*
  * free_initmem: Free all the __init memory.
  */
 void __init_refok free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			(unsigned long)__init_begin,
-			(unsigned long)__init_end);
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_init_pages("initrd memory", start, end);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
......
@@ -60,6 +60,15 @@ extern void __pgd_error(const char *file, int line, pgd_t);
  */
 #define FIRST_USER_ADDRESS	PAGE_SIZE
 
+/*
+ * Use TASK_SIZE as the ceiling argument for free_pgtables() and
+ * free_pgd_range() to avoid freeing the modules pmd when LPAE is enabled (pmd
+ * page shared between user and kernel).
+ */
+#ifdef CONFIG_ARM_LPAE
+#define USER_PGTABLES_CEILING	TASK_SIZE
+#endif
+
 /*
  * The pgprot_* and protection_map entries will be fixed up in runtime
  * to include the cachable and bufferable bits based on memory policy,
......
@@ -99,6 +99,9 @@ void show_mem(unsigned int filter)
 	printk("Mem-info:\n");
 	show_free_areas(filter);
 
+	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
+		return;
+
 	for_each_bank (i, mi) {
 		struct membank *bank = &mi->bank[i];
 		unsigned int pfn1, pfn2;
@@ -424,24 +427,6 @@ void __init bootmem_init(void)
 	max_pfn = max_high - PHYS_PFN_OFFSET;
 }
 
-static inline int free_area(unsigned long pfn, unsigned long end, char *s)
-{
-	unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
-
-	for (; pfn < end; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		pages++;
-	}
-
-	if (size && s)
-		printk(KERN_INFO "Freeing %s memory: %dK\n", s, size);
-
-	return pages;
-}
-
 /*
  * Poison init memory with an undefined instruction (ARM) or a branch to an
  * undefined instruction (Thumb).
@@ -534,6 +519,14 @@ static void __init free_unused_memmap(struct meminfo *mi)
 #endif
 }
 
+#ifdef CONFIG_HIGHMEM
+static inline void free_area_high(unsigned long pfn, unsigned long end)
+{
+	for (; pfn < end; pfn++)
+		free_highmem_page(pfn_to_page(pfn));
+}
+#endif
+
 static void __init free_highpages(void)
 {
 #ifdef CONFIG_HIGHMEM
@@ -569,8 +562,7 @@ static void __init free_highpages(void)
 			if (res_end > end)
 				res_end = end;
 			if (res_start != start)
-				totalhigh_pages += free_area(start, res_start,
-							     NULL);
+				free_area_high(start, res_start);
 			start = res_end;
 			if (start == end)
 				break;
@@ -578,9 +570,8 @@ static void __init free_highpages(void)
 		/* And now free anything which remains */
 		if (start < end)
-			totalhigh_pages += free_area(start, end, NULL);
+			free_area_high(start, end);
 	}
-	totalram_pages += totalhigh_pages;
 #endif
 }
@@ -609,8 +600,7 @@ void __init mem_init(void)
 
 #ifdef CONFIG_SA1111
 	/* now that our DMA memory is actually so designated, we can free it */
-	totalram_pages += free_area(PHYS_PFN_OFFSET,
-				    __phys_to_pfn(__pa(swapper_pg_dir)), NULL);
+	free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
 #endif
 
 	free_highpages();
@@ -738,16 +728,12 @@ void free_initmem(void)
 	extern char __tcm_start, __tcm_end;
 
 	poison_init_mem(&__tcm_start, &__tcm_end - &__tcm_start);
-	totalram_pages += free_area(__phys_to_pfn(__pa(&__tcm_start)),
-				    __phys_to_pfn(__pa(&__tcm_end)),
-				    "TCM link");
+	free_reserved_area(&__tcm_start, &__tcm_end, 0, "TCM link");
 #endif
 
 	poison_init_mem(__init_begin, __init_end - __init_begin);
 	if (!machine_is_integrator() && !machine_is_cintegrator())
-		totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
-					    __phys_to_pfn(__pa(__init_end)),
-					    "init");
+		free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
@@ -758,9 +744,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 {
 	if (!keep_initrd) {
 		poison_init_mem((void *)start, PAGE_ALIGN(end) - start);
-		totalram_pages += free_area(__phys_to_pfn(__pa(start)),
-					    __phys_to_pfn(__pa(end)),
-					    "initrd");
+		free_reserved_area(start, end, 0, "initrd");
 	}
 }
......
@@ -197,24 +197,6 @@ void __init bootmem_init(void)
 	max_pfn = max_low_pfn = max;
 }
 
-static inline int free_area(unsigned long pfn, unsigned long end, char *s)
-{
-	unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
-
-	for (; pfn < end; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		pages++;
-	}
-
-	if (size && s)
-		pr_info("Freeing %s memory: %dK\n", s, size);
-
-	return pages;
-}
-
 /*
  * Poison init memory with an undefined instruction (0x0).
  */
@@ -405,9 +387,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	poison_init_mem(__init_begin, __init_end - __init_begin);
-	totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
-				    __phys_to_pfn(__pa(__init_end)),
-				    "init");
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
@@ -418,9 +398,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 {
 	if (!keep_initrd) {
 		poison_init_mem((void *)start, PAGE_ALIGN(end) - start);
-		totalram_pages += free_area(__phys_to_pfn(__pa(start)),
-					    __phys_to_pfn(__pa(end)),
-					    "initrd");
+		free_reserved_area(start, end, 0, "initrd");
 	}
 }
......
@@ -391,17 +391,14 @@ int kern_addr_valid(unsigned long addr)
 }
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 #ifdef CONFIG_ARM64_64K_PAGES
-int __meminit vmemmap_populate(struct page *start_page,
-			       unsigned long size, int node)
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 {
-	return vmemmap_populate_basepages(start_page, size, node);
+	return vmemmap_populate_basepages(start, end, node);
 }
 #else	/* !CONFIG_ARM64_64K_PAGES */
-int __meminit vmemmap_populate(struct page *start_page,
-			       unsigned long size, int node)
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 {
-	unsigned long addr = (unsigned long)start_page;
-	unsigned long end = (unsigned long)(start_page + size);
+	unsigned long addr = start;
 	unsigned long next;
 	pgd_t *pgd;
 	pud_t *pud;
@@ -434,7 +431,7 @@ int __meminit vmemmap_populate(struct page *start_page,
 	return 0;
 }
 #endif	/* CONFIG_ARM64_64K_PAGES */
-void vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
@@ -146,34 +146,14 @@ void __init mem_init(void)
 		initsize >> 10);
 }
 
-static inline void free_area(unsigned long addr, unsigned long end, char *s)
-{
-	unsigned int size = (end - addr) >> 10;
-
-	for (; addr < end; addr += PAGE_SIZE) {
-		struct page *page = virt_to_page(addr);
-		ClearPageReserved(page);
-		init_page_count(page);
-		free_page(addr);
-		totalram_pages++;
-	}
-
-	if (size && s)
-		printk(KERN_INFO "Freeing %s memory: %dK (%lx - %lx)\n",
-		       s, size, end - (size << 10), end);
-}
-
 void free_initmem(void)
 {
-	free_area((unsigned long)__init_begin, (unsigned long)__init_end,
-		  "init");
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_area(start, end, "initrd");
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -103,7 +103,7 @@ void __init mem_init(void)
 	max_mapnr = num_physpages = MAP_NR(high_memory);
 	printk(KERN_DEBUG "Kernel managed physical pages: %lu\n", num_physpages);
 
-	/* This will put all memory onto the freelists. */
+	/* This will put all low memory onto the freelists. */
 	totalram_pages = free_all_bootmem();
 
 	reservedpages = 0;
@@ -129,24 +129,11 @@ void __init mem_init(void)
 		initk, codek, datak, DMA_UNCACHED_REGION >> 10, (reservedpages << (PAGE_SHIFT-10)));
 }
 
-static void __init free_init_pages(const char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr;
-	/* next to check that the page we free is not a partial page */
-	for (addr = begin; addr + PAGE_SIZE <= end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
-}
-
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
 #ifndef CONFIG_MPU
-	free_init_pages("initrd memory", start, end);
+	free_reserved_area(start, end, 0, "initrd");
 #endif
 }
 #endif
@@ -154,10 +141,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
 void __init_refok free_initmem(void)
 {
 #if defined CONFIG_RAMKERNEL && !defined CONFIG_MPU
-	free_init_pages("unused kernel memory",
-			(unsigned long)(&__init_begin),
-			(unsigned long)(&__init_end));
-
+	free_initmem_default(0);
 	if (memory_start == (unsigned long)(&__init_end))
 		memory_start = (unsigned long)(&__init_begin);
 #endif
......
@@ -77,37 +77,11 @@ void __init mem_init(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	int pages = 0;
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-		pages++;
-	}
-	printk(KERN_INFO "Freeing initrd memory: %luk freed\n",
-	       (pages * PAGE_SIZE) >> 10);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
 
 void __init free_initmem(void)
 {
-	unsigned long addr;
-
-	/*
-	 * The following code should be cool even if these sections
-	 * are not page aligned.
-	 */
-	addr = PAGE_ALIGN((unsigned long)(__init_begin));
-
-	/* next to check that the page we free is not a partial page */
-	for (; addr + PAGE_SIZE < (unsigned long)(__init_end);
-	     addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing unused kernel memory: %dK freed\n",
-	       (int) ((addr - PAGE_ALIGN((long) &__init_begin)) >> 10));
+	free_initmem_default(0);
 }
@@ -12,12 +12,10 @@
 #include <linux/init.h>
 #include <linux/bootmem.h>
 #include <asm/tlb.h>
+#include <asm/sections.h>
 
 unsigned long empty_zero_page;
 
-extern char _stext, _edata, _etext; /* From linkerscript */
-extern char __init_begin, __init_end;
-
 void __init
 mem_init(void)
 {
@@ -67,15 +65,5 @@ mem_init(void)
 void
 free_initmem(void)
 {
-	unsigned long addr;
-
-	addr = (unsigned long)(&__init_begin);
-	for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk (KERN_INFO "Freeing unused kernel memory: %luk freed\n",
-		(unsigned long)((&__init_end - &__init_begin) >> 10));
+	free_initmem_default(0);
 }
@@ -122,7 +122,7 @@ void __init mem_init(void)
 	int codek = 0, datak = 0;
 
-	/* this will put all memory onto the freelists */
+	/* this will put all low memory onto the freelists */
 	totalram_pages = free_all_bootmem();
 
 #ifdef CONFIG_MMU
@@ -131,14 +131,8 @@ void __init mem_init(void)
 			datapages++;
 
 #ifdef CONFIG_HIGHMEM
-	for (pfn = num_physpages - 1; pfn >= num_mappedpages; pfn--) {
-		struct page *page = &mem_map[pfn];
-
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		totalram_pages++;
-	}
+	for (pfn = num_physpages - 1; pfn >= num_mappedpages; pfn--)
+		free_highmem_page(&mem_map[pfn]);
 #endif
 
 	codek = ((unsigned long) &_etext - (unsigned long) &_stext) >> 10;
@@ -168,21 +162,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 #if defined(CONFIG_RAMKERNEL) && !defined(CONFIG_PROTECT_KERNEL)
-	unsigned long start, end, addr;
-
-	start = PAGE_ALIGN((unsigned long) &__init_begin);	/* round up */
-	end = ((unsigned long) &__init_end) & PAGE_MASK;	/* round down */
-
-	/* next to check that the page we free is not a partial page */
-	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-
-	printk("Freeing unused kernel memory: %ldKiB freed (0x%lx - 0x%lx)\n",
-	       (end - start) >> 10, start, end);
+	free_initmem_default(0);
 #endif
 } /* end free_initmem() */
@@ -193,14 +173,6 @@ void free_initmem(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	int pages = 0;
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-		pages++;
-	}
-	printk("Freeing initrd memory: %dKiB freed\n", (pages * PAGE_SIZE) >> 10);
+	free_reserved_area(start, end, 0, "initrd");
 } /* end free_initrd_mem() */
 #endif
@@ -139,7 +139,7 @@ void __init mem_init(void)
 	start_mem = PAGE_ALIGN(start_mem);
 	max_mapnr = num_physpages = MAP_NR(high_memory);
 
-	/* this will put all memory onto the freelists */
+	/* this will put all low memory onto the freelists */
 	totalram_pages = free_all_bootmem();
 
 	codek = (_etext - _stext) >> 10;
@@ -161,15 +161,7 @@ void __init mem_init(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	int pages = 0;
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-		pages++;
-	}
-	printk ("Freeing initrd memory: %dk freed\n", pages);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -177,23 +169,7 @@ void
 free_initmem(void)
 {
 #ifdef CONFIG_RAMKERNEL
-	unsigned long addr;
-/*
- *	the following code should be cool even if these sections
- *	are not page aligned.
- */
-	addr = PAGE_ALIGN((unsigned long)(__init_begin));
-	/* next to check that the page we free is not a partial page */
-	for (; addr + PAGE_SIZE < (unsigned long)__init_end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing unused kernel memory: %ldk freed (0x%x - 0x%x)\n",
-	       (addr - PAGE_ALIGN((long) __init_begin)) >> 10,
-	       (int)(PAGE_ALIGN((unsigned long)__init_begin)),
-	       (int)(addr - PAGE_SIZE));
+	free_initmem_default(0);
 #endif
 }
@@ -2,6 +2,7 @@
 #define _ASM_IA64_HUGETLB_H
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
......
@@ -47,6 +47,8 @@ void show_mem(unsigned int filter)
 	printk(KERN_INFO "Mem-info:\n");
 	show_free_areas(filter);
 	printk(KERN_INFO "Node memory in pages:\n");
+	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
+		return;
 	for_each_online_pgdat(pgdat) {
 		unsigned long present;
 		unsigned long flags;
......
@@ -623,6 +623,8 @@ void show_mem(unsigned int filter)
 	printk(KERN_INFO "Mem-info:\n");
 	show_free_areas(filter);
+	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
+		return;
 	printk(KERN_INFO "Node memory in pages:\n");
 	for_each_online_pgdat(pgdat) {
 		unsigned long present;
@@ -817,13 +819,12 @@ void arch_refresh_nodedata(int update_node, pg_data_t *update_pgdat)
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-int __meminit vmemmap_populate(struct page *start_page,
-			       unsigned long size, int node)
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 {
-	return vmemmap_populate_basepages(start_page, size, node);
+	return vmemmap_populate_basepages(start, end, node);
 }
 
-void vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
 #endif
#endif #endif
@@ -154,25 +154,14 @@ ia64_init_addr_space (void)
 void
 free_initmem (void)
 {
-	unsigned long addr, eaddr;
-
-	addr = (unsigned long) ia64_imva(__init_begin);
-	eaddr = (unsigned long) ia64_imva(__init_end);
-	while (addr < eaddr) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		++totalram_pages;
-		addr += PAGE_SIZE;
-	}
-	printk(KERN_INFO "Freeing unused kernel memory: %ldkB freed\n",
-	       (__init_end - __init_begin) >> 10);
+	free_reserved_area((unsigned long)ia64_imva(__init_begin),
+			   (unsigned long)ia64_imva(__init_end),
+			   0, "unused kernel");
 }
 
 void __init
 free_initrd_mem (unsigned long start, unsigned long end)
 {
-	struct page *page;
 	/*
 	 * EFI uses 4KB pages while the kernel can use 4KB or bigger.
 	 * Thus EFI and the kernel may have different page sizes. It is
@@ -213,11 +202,7 @@ free_initrd_mem (unsigned long start, unsigned long end)
 	for (; start < end; start += PAGE_SIZE) {
 		if (!virt_addr_valid(start))
 			continue;
-		page = virt_to_page(start);
-		ClearPageReserved(page);
-		init_page_count(page);
-		free_page(start);
-		++totalram_pages;
+		free_reserved_page(virt_to_page(start));
 	}
 }
......
@@ -61,13 +61,26 @@ paddr_to_nid(unsigned long paddr)
 int __meminit __early_pfn_to_nid(unsigned long pfn)
 {
 	int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;
+	/*
+	 * NOTE: The following SMP-unsafe globals are only used early in boot
+	 * when the kernel is running single-threaded.
+	 */
+	static int __meminitdata last_ssec, last_esec;
+	static int __meminitdata last_nid;
+
+	if (section >= last_ssec && section < last_esec)
+		return last_nid;
 
 	for (i = 0; i < num_node_memblks; i++) {
 		ssec = node_memblk[i].start_paddr >> PA_SECTION_SHIFT;
 		esec = (node_memblk[i].start_paddr + node_memblk[i].size +
 			((1L << PA_SECTION_SHIFT) - 1)) >> PA_SECTION_SHIFT;
-		if (section >= ssec && section < esec)
+		if (section >= ssec && section < esec) {
+			last_ssec = ssec;
+			last_esec = esec;
+			last_nid = node_memblk[i].nid;
 			return node_memblk[i].nid;
+		}
 	}
 
 	return -1;
......
@@ -28,10 +28,7 @@
 #include <asm/mmu_context.h>
 #include <asm/setup.h>
 #include <asm/tlb.h>
-
-/* References to section boundaries */
-extern char _text, _etext, _edata;
-extern char __init_begin, __init_end;
+#include <asm/sections.h>
 
 pgd_t swapper_pg_dir[1024];
@@ -184,17 +181,7 @@ void __init mem_init(void)
  *======================================================================*/
 void free_initmem(void)
 {
-	unsigned long addr;
-
-	addr = (unsigned long)(&__init_begin);
-	for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk (KERN_INFO "Freeing unused kernel memory: %dk freed\n", \
-	  (int)(&__init_end - &__init_begin) >> 10);
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
@@ -204,13 +191,6 @@ void free_initmem(void)
  *======================================================================*/
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	unsigned long p;
-
-	for (p = start; p < end; p += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(p));
-		init_page_count(virt_to_page(p));
-		free_page(p);
-		totalram_pages++;
-	}
-	printk (KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -110,18 +110,7 @@ void __init paging_init(void)
 void free_initmem(void)
 {
 #ifndef CONFIG_MMU_SUN3
-	unsigned long addr;
-
-	addr = (unsigned long) __init_begin;
-	for (; addr < ((unsigned long) __init_end); addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	pr_notice("Freeing unused kernel memory: %luk freed (0x%x - 0x%x)\n",
-		(addr - (unsigned long) __init_begin) >> 10,
-		(unsigned int) __init_begin, (unsigned int) __init_end);
+	free_initmem_default(0);
 #endif /* CONFIG_MMU_SUN3 */
 }
@@ -213,15 +202,6 @@ void __init mem_init(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	int pages = 0;
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-		pages++;
-	}
-	pr_notice("Freeing initrd memory: %dk freed\n",
-		pages << (PAGE_SHIFT - 10));
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -380,14 +380,8 @@ void __init mem_init(void)
 #ifdef CONFIG_HIGHMEM
 	unsigned long tmp;
 
-	for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
-		struct page *page = pfn_to_page(tmp);
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		totalhigh_pages++;
-	}
-	totalram_pages += totalhigh_pages;
+	for (tmp = highstart_pfn; tmp < highend_pfn; tmp++)
+		free_highmem_page(pfn_to_page(tmp));
 	num_physpages += totalhigh_pages;
 #endif /* CONFIG_HIGHMEM */
@@ -412,32 +406,15 @@ void __init mem_init(void)
 	return;
 }
 
-static void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr;
-
-	for (addr = begin; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
-	}
-	pr_info("Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-}
-
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			(unsigned long)(&__init_begin),
-			(unsigned long)(&__init_end));
+	free_initmem_default(POISON_FREE_INITMEM);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	end = end & PAGE_MASK;
-	free_init_pages("initrd memory", start, end);
+	free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
 }
 #endif
......
@@ -46,7 +46,6 @@ void machine_shutdown(void);
 void machine_halt(void);
 void machine_power_off(void);
 
-void free_init_pages(char *what, unsigned long begin, unsigned long end);
 extern void *alloc_maybe_bootmem(size_t size, gfp_t mask);
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
......
@@ -82,13 +82,9 @@ static unsigned long highmem_setup(void)
 		/* FIXME not sure about */
 		if (memblock_is_reserved(pfn << PAGE_SHIFT))
 			continue;
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		totalhigh_pages++;
+		free_highmem_page(page);
 		reservedpages++;
 	}
-	totalram_pages += totalhigh_pages;
 	pr_info("High memory: %luk\n",
 		totalhigh_pages << (PAGE_SHIFT-10));
@@ -236,40 +232,16 @@ void __init setup_memory(void)
 	paging_init();
 }
 
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr;
-
-	for (addr = begin; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	pr_info("Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
-}
-
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	int pages = 0;
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-		pages++;
-	}
-	pr_notice("Freeing initrd memory: %dk freed\n",
-		(int)(pages * (PAGE_SIZE / 1024)));
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
 
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			(unsigned long)(&__init_begin),
-			(unsigned long)(&__init_end));
+	free_initmem_default(0);
 }
 
 void __init mem_init(void)
......
@@ -10,6 +10,7 @@
 #define __ASM_HUGETLB_H
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
......
@@ -77,10 +77,9 @@ EXPORT_SYMBOL_GPL(empty_zero_page);
 /*
  * Not static inline because used by IP27 special magic initialization code
  */
-unsigned long setup_zero_pages(void)
+void setup_zero_pages(void)
 {
-	unsigned int order;
-	unsigned long size;
+	unsigned int order, i;
 	struct page *page;
 
 	if (cpu_has_vce)
@@ -94,15 +93,10 @@ unsigned long setup_zero_pages(void)
 	page = virt_to_page((void *)empty_zero_page);
 	split_page(page, order);
-	while (page < virt_to_page((void *)(empty_zero_page + (PAGE_SIZE << order)))) {
-		SetPageReserved(page);
-		page++;
-	}
-
-	size = PAGE_SIZE << order;
-	zero_page_mask = (size - 1) & PAGE_MASK;
-
-	return 1UL << order;
+	for (i = 0; i < (1 << order); i++, page++)
+		mark_page_reserved(page);
+
+	zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
 }
 
 #ifdef CONFIG_MIPS_MT_SMTC
@@ -380,7 +374,7 @@ void __init mem_init(void)
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
 	totalram_pages += free_all_bootmem();
-	totalram_pages -= setup_zero_pages();	/* Setup zeroed pages.  */
+	setup_zero_pages();	/* Setup zeroed pages.  */
 
 	reservedpages = ram = 0;
 	for (tmp = 0; tmp < max_low_pfn; tmp++)
@@ -399,12 +393,8 @@ void __init mem_init(void)
 			SetPageReserved(page);
 			continue;
 		}
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		totalhigh_pages++;
+		free_highmem_page(page);
 	}
-	totalram_pages += totalhigh_pages;
 	num_physpages += totalhigh_pages;
 #endif
@@ -440,11 +430,8 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
 		struct page *page = pfn_to_page(pfn);
 		void *addr = phys_to_virt(PFN_PHYS(pfn));
 
-		ClearPageReserved(page);
-		init_page_count(page);
 		memset(addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		__free_page(page);
-		totalram_pages++;
+		free_reserved_page(page);
 	}
 	printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
 }
@@ -452,18 +439,14 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_init_pages("initrd memory",
-			virt_to_phys((void *)start),
-			virt_to_phys((void *)end));
+	free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
 }
 #endif
 
 void __init_refok free_initmem(void)
 {
 	prom_free_prom_memory();
-	free_init_pages("unused kernel memory",
-			__pa_symbol(&__init_begin),
-			__pa_symbol(&__init_end));
+	free_initmem_default(POISON_FREE_INITMEM);
 }
 
 #ifndef CONFIG_MIPS_PGD_C0_CONTEXT
......
@@ -457,7 +457,7 @@ void __init prom_free_prom_memory(void)
 	/* We got nothing to free here ...  */
 }
 
-extern unsigned long setup_zero_pages(void);
+extern void setup_zero_pages(void);
 
 void __init paging_init(void)
 {
@@ -492,7 +492,7 @@ void __init mem_init(void)
 		totalram_pages += free_all_bootmem_node(NODE_DATA(node));
 	}
 
-	totalram_pages -= setup_zero_pages();	/* This comes from node 0 */
+	setup_zero_pages();	/* This comes from node 0 */
 
 	codesize = (unsigned long) &_etext - (unsigned long) &_text;
 	datasize = (unsigned long) &_edata - (unsigned long) &_etext;
......
@@ -138,31 +138,12 @@ void __init mem_init(void)
 	       totalhigh_pages << (PAGE_SHIFT - 10));
 }
 
-/*
- *
- */
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr;
-
-	for (addr = begin; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		memset((void *) addr, 0xcc, PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
-}
-
 /*
  * recycle memory containing stuff only required for initialisation
  */
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			(unsigned long) &__init_begin,
-			(unsigned long) &__init_end);
+	free_initmem_default(POISON_FREE_INITMEM);
 }
 
 /*
@@ -171,6 +152,6 @@ void free_initmem(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_init_pages("initrd memory", start, end);
+	free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
 }
 #endif
@@ -43,6 +43,7 @@
 #include <asm/kmap_types.h>
 #include <asm/fixmap.h>
 #include <asm/tlbflush.h>
+#include <asm/sections.h>
 
 int mem_init_done;
@@ -201,9 +202,6 @@ void __init paging_init(void)
 
 /* References to section boundaries */
 
-extern char _stext, _etext, _edata, __bss_start, _end;
-extern char __init_begin, __init_end;
-
 static int __init free_pages_init(void)
 {
 	int reservedpages, pfn;
@@ -263,30 +261,11 @@ void __init mem_init(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	printk(KERN_INFO "Freeing initrd memory: %ldk freed\n",
-	       (end - start) >> 10);
-
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-	}
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
 
 void free_initmem(void)
 {
-	unsigned long addr;
-
-	addr = (unsigned long)(&__init_begin);
-	for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing unused kernel memory: %luk freed\n",
-	       ((unsigned long)&__init_end -
-	        (unsigned long)&__init_begin) >> 10);
+	free_initmem_default(0);
 }
@@ -505,7 +505,6 @@ static void __init map_pages(unsigned long start_vaddr,
 
 void free_initmem(void)
 {
-	unsigned long addr;
 	unsigned long init_begin = (unsigned long)__init_begin;
 	unsigned long init_end = (unsigned long)__init_end;
@@ -533,19 +532,10 @@ void free_initmem(void)
 	 * pages are no-longer executable */
 	flush_icache_range(init_begin, init_end);
 
-	for (addr = init_begin; addr < init_end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		num_physpages++;
-		totalram_pages++;
-	}
+	num_physpages += free_initmem_default(0);
 
 	/* set up a new led state on systems shipped LED State panel */
 	pdc_chassis_send_status(PDC_CHASSIS_DIRECT_BCOMPLETE);
-
-	printk(KERN_INFO "Freeing unused kernel memory: %luk freed\n",
-		(init_end - init_begin) >> 10);
 }
@@ -697,6 +687,8 @@ void show_mem(unsigned int filter)
 	printk(KERN_INFO "Mem-info:\n");
 	show_free_areas(filter);
+	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
+		return;
 #ifndef CONFIG_DISCONTIGMEM
 	i = max_mapnr;
 	while (i-- > 0) {
@@ -1107,15 +1099,6 @@ void flush_tlb_all(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (start >= end)
-		return;
-	printk(KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		num_physpages++;
-		totalram_pages++;
-	}
+	num_physpages += free_reserved_area(start, end, 0, "initrd");
 }
 #endif
@@ -3,6 +3,7 @@
 #ifdef CONFIG_HUGETLB_PAGE
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 extern struct kmem_cache *hugepte_cache;
......
...@@ -150,10 +150,7 @@ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end) ...@@ -150,10 +150,7 @@ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
if (addr <= rtas_end && ((addr + PAGE_SIZE) > rtas_start)) if (addr <= rtas_end && ((addr + PAGE_SIZE) > rtas_start))
continue; continue;
ClearPageReserved(pfn_to_page(addr >> PAGE_SHIFT)); free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT));
init_page_count(pfn_to_page(addr >> PAGE_SHIFT));
free_page((unsigned long)__va(addr));
totalram_pages++;
} }
} }
#endif #endif
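
free_reserved_page() is the per-page building block behind these conversions; a sketch consistent with the four-line sequences it replaces (assumed shape of the include/linux/mm.h helpers):

	static inline void __free_reserved_page(struct page *page)
	{
		ClearPageReserved(page);
		init_page_count(page);
		__free_page(page);
	}

	static inline void free_reserved_page(struct page *page)
	{
		__free_reserved_page(page);
		totalram_pages++;
	}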
@@ -1045,10 +1045,7 @@ static void fadump_release_memory(unsigned long begin, unsigned long end)
 		if (addr <= ra_end && ((addr + PAGE_SIZE) > ra_start))
 			continue;
 
-		ClearPageReserved(pfn_to_page(addr >> PAGE_SHIFT));
-		init_page_count(pfn_to_page(addr >> PAGE_SHIFT));
-		free_page((unsigned long)__va(addr));
-		totalram_pages++;
+		free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT));
 	}
 }
...
@@ -756,12 +756,7 @@ static __init void kvm_free_tmp(void)
 	end = (ulong)&kvm_tmp[ARRAY_SIZE(kvm_tmp)] & PAGE_MASK;
 
 	/* Free the tmp space we don't need */
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-	}
+	free_reserved_area(start, end, 0, NULL);
 }
 
 static int __init kvm_guest_init(void)
...
@@ -263,19 +263,14 @@ static __meminit void vmemmap_list_populate(unsigned long phys,
 	vmemmap_list = vmem_back;
 }
 
-int __meminit vmemmap_populate(struct page *start_page,
-			       unsigned long nr_pages, int node)
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 {
-	unsigned long start = (unsigned long)start_page;
-	unsigned long end = (unsigned long)(start_page + nr_pages);
 	unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
 
 	/* Align to the page size of the linear mapping. */
 	start = _ALIGN_DOWN(start, page_size);
 
-	pr_debug("vmemmap_populate page %p, %ld pages, node %d\n",
-		 start_page, nr_pages, node);
-	pr_debug(" -> map %lx..%lx\n", start, end);
+	pr_debug("vmemmap_populate %lx..%lx, node %d\n", start, end, node);
 
 	for (; start < end; start += page_size) {
 		void *p;
@@ -298,7 +293,7 @@ int __meminit vmemmap_populate(struct page *start_page,
 	return 0;
 }
 
-void vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
...
@@ -352,13 +352,9 @@ void __init mem_init(void)
 			struct page *page = pfn_to_page(pfn);
 
 			if (memblock_is_reserved(paddr))
 				continue;
-			ClearPageReserved(page);
-			init_page_count(page);
-			__free_page(page);
-			totalhigh_pages++;
+			free_highmem_page(page);
 			reservedpages--;
 		}
-		totalram_pages += totalhigh_pages;
 		printk(KERN_DEBUG "High memory: %luk\n",
 		       totalhigh_pages << (PAGE_SHIFT-10));
 	}
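
free_highmem_page() gives highmem pages the same treatment; a sketch consistent with the loops it replaces here and in the sparc/um/x86 hunks further down (the exact upstream body also credits the zone's managed-page counter, omitted here for brevity):

	void free_highmem_page(struct page *page)
	{
		__free_reserved_page(page);	/* see the sketch above */
		totalram_pages++;
		totalhigh_pages++;
	}

Because the helper bumps totalram_pages itself, callers also drop the manual "totalram_pages += totalhigh_pages" fix-up in mem_init(), which is why those lines disappear as well.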
@@ -405,39 +401,14 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	unsigned long addr;
-
 	ppc_md.progress = ppc_printk_progress;
-
-	addr = (unsigned long)__init_begin;
-	for (; addr < (unsigned long)__init_end; addr += PAGE_SIZE) {
-		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	pr_info("Freeing unused kernel memory: %luk freed\n",
-		((unsigned long)__init_end -
-		(unsigned long)__init_begin) >> 10);
+	free_initmem_default(POISON_FREE_INITMEM);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (start >= end)
-		return;
-
-	start = _ALIGN_DOWN(start, PAGE_SIZE);
-	end = _ALIGN_UP(end, PAGE_SIZE);
-	pr_info("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
-
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-	}
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
...
@@ -62,14 +62,11 @@ static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];
  */
 static void __init setup_node_to_cpumask_map(void)
 {
-	unsigned int node, num = 0;
+	unsigned int node;
 
 	/* setup nr_node_ids if not done yet */
-	if (nr_node_ids == MAX_NUMNODES) {
-		for_each_node_mask(node, node_possible_map)
-			num = node;
-		nr_node_ids = num + 1;
-	}
+	if (nr_node_ids == MAX_NUMNODES)
+		setup_nr_node_ids();
 
 	/* allocate the map */
 	for (node = 0; node < nr_node_ids; node++)
...
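
setup_nr_node_ids() is the same computation, moved into common code so that powerpc and x86 (see the identical hunk further down) stop duplicating it; a sketch built directly from the loop removed above:

	void __init setup_nr_node_ids(void)
	{
		unsigned int highest = 0;
		unsigned int node;

		for_each_node_mask(node, node_possible_map)
			highest = node;
		nr_node_ids = highest + 1;
	}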
@@ -172,12 +172,9 @@ static struct fsl_diu_shared_fb __attribute__ ((__aligned__(8))) diu_shared_fb;
 
 static inline void mpc512x_free_bootmem(struct page *page)
 {
-	__ClearPageReserved(page);
 	BUG_ON(PageTail(page));
 	BUG_ON(atomic_read(&page->_count) > 1);
-	atomic_set(&page->_count, 1);
-	__free_page(page);
-	totalram_pages++;
+	free_reserved_page(page);
 }
 
 void mpc512x_release_bootmem(void)
...
@@ -72,6 +72,7 @@ unsigned long memory_block_size_bytes(void)
 	return get_memblock_size();
 }
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
 static int pseries_remove_memblock(unsigned long base, unsigned int memblock_size)
 {
 	unsigned long start, start_pfn;
@@ -153,6 +154,17 @@ static int pseries_remove_memory(struct device_node *np)
 	ret = pseries_remove_memblock(base, lmb_size);
 	return ret;
 }
+#else
+static inline int pseries_remove_memblock(unsigned long base,
+					  unsigned int memblock_size)
+{
+	return -EOPNOTSUPP;
+}
+static inline int pseries_remove_memory(struct device_node *np)
+{
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_MEMORY_HOTREMOVE */
 
 static int pseries_add_memory(struct device_node *np)
 {
...
@@ -114,7 +114,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 #define huge_ptep_set_wrprotect(__mm, __addr, __ptep)			\
 ({									\
 	pte_t __pte = huge_ptep_get(__ptep);				\
-	if (pte_write(__pte)) {						\
+	if (huge_pte_write(__pte)) {					\
 		huge_ptep_invalidate(__mm, __addr, __ptep);		\
 		set_huge_pte_at(__mm, __addr, __ptep,			\
 				huge_pte_wrprotect(__pte));		\
@@ -127,4 +127,58 @@ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
 	huge_ptep_invalidate(vma->vm_mm, address, ptep);
 }
 
+static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)
+{
+	pte_t pte;
+	pmd_t pmd;
+
+	pmd = mk_pmd_phys(page_to_phys(page), pgprot);
+	pte_val(pte) = pmd_val(pmd);
+	return pte;
+}
+
+static inline int huge_pte_write(pte_t pte)
+{
+	pmd_t pmd;
+
+	pmd_val(pmd) = pte_val(pte);
+	return pmd_write(pmd);
+}
+
+static inline int huge_pte_dirty(pte_t pte)
+{
+	/* No dirty bit in the segment table entry. */
+	return 0;
+}
+
+static inline pte_t huge_pte_mkwrite(pte_t pte)
+{
+	pmd_t pmd;
+
+	pmd_val(pmd) = pte_val(pte);
+	pte_val(pte) = pmd_val(pmd_mkwrite(pmd));
+	return pte;
+}
+
+static inline pte_t huge_pte_mkdirty(pte_t pte)
+{
+	/* No dirty bit in the segment table entry. */
+	return pte;
+}
+
+static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
+{
+	pmd_t pmd;
+
+	pmd_val(pmd) = pte_val(pte);
+	pte_val(pte) = pmd_val(pmd_modify(pmd, newprot));
+	return pte;
+}
+
+static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+				  pte_t *ptep)
+{
+	pmd_clear((pmd_t *) ptep);
+}
+
 #endif /* _ASM_S390_HUGETLB_H */
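
The point of the new huge_pte_* accessors is that generic hugetlb code can manipulate huge PTEs without assuming they share the normal pte bit layout (on s390 they are really segment-table entries). A sketch of a generic-code caller in the style of mm/hugetlb.c after this series (assumed, for illustration):

	static void set_huge_ptep_writable(struct vm_area_struct *vma,
					   unsigned long address, pte_t *ptep)
	{
		pte_t entry;

		/* was pte_mkwrite(pte_mkdirty(...)) before this series */
		entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
		if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
			update_mmu_cache(vma, address, ptep);
	}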
@@ -424,6 +424,13 @@ extern unsigned long MODULES_END;
 #define __S110	PAGE_RW
 #define __S111	PAGE_RW
 
+/*
+ * Segment entry (large page) protection definitions.
+ */
+#define SEGMENT_NONE	__pgprot(_HPAGE_TYPE_NONE)
+#define SEGMENT_RO	__pgprot(_HPAGE_TYPE_RO)
+#define SEGMENT_RW	__pgprot(_HPAGE_TYPE_RW)
+
 static inline int mm_exclusive(struct mm_struct *mm)
 {
 	return likely(mm == current->active_mm &&
@@ -914,26 +921,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 #ifdef CONFIG_HUGETLB_PAGE
 static inline pte_t pte_mkhuge(pte_t pte)
 {
-	/*
-	 * PROT_NONE needs to be remapped from the pte type to the ste type.
-	 * The HW invalid bit is also different for pte and ste. The pte
-	 * invalid bit happens to be the same as the ste _SEGMENT_ENTRY_LARGE
-	 * bit, so we don't have to clear it.
-	 */
-	if (pte_val(pte) & _PAGE_INVALID) {
-		if (pte_val(pte) & _PAGE_SWT)
-			pte_val(pte) |= _HPAGE_TYPE_NONE;
-		pte_val(pte) |= _SEGMENT_ENTRY_INV;
-	}
-	/*
-	 * Clear SW pte bits, there are no SW bits in a segment table entry.
-	 */
-	pte_val(pte) &= ~(_PAGE_SWT | _PAGE_SWX | _PAGE_SWC |
-			  _PAGE_SWR | _PAGE_SWW);
-	/*
-	 * Also set the change-override bit because we don't need dirty bit
-	 * tracking for hugetlbfs pages.
-	 */
 	pte_val(pte) |= (_SEGMENT_ENTRY_LARGE | _SEGMENT_ENTRY_CO);
 	return pte;
 }
@@ -1278,31 +1265,7 @@ static inline void __pmd_idte(unsigned long address, pmd_t *pmdp)
 	}
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-
-#define SEGMENT_NONE	__pgprot(_HPAGE_TYPE_NONE)
-#define SEGMENT_RO	__pgprot(_HPAGE_TYPE_RO)
-#define SEGMENT_RW	__pgprot(_HPAGE_TYPE_RW)
-
-#define __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
-
-#define __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
-
-static inline int pmd_trans_splitting(pmd_t pmd)
-{
-	return pmd_val(pmd) & _SEGMENT_ENTRY_SPLIT;
-}
-
-static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
-			      pmd_t *pmdp, pmd_t entry)
-{
-	if (!(pmd_val(entry) & _SEGMENT_ENTRY_INV) && MACHINE_HAS_EDAT1)
-		pmd_val(entry) |= _SEGMENT_ENTRY_CO;
-	*pmdp = entry;
-}
-
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE)
 static inline unsigned long massage_pgprot_pmd(pgprot_t pgprot)
 {
 	/*
@@ -1323,10 +1286,11 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 	return pmd;
 }
 
-static inline pmd_t pmd_mkhuge(pmd_t pmd)
+static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
 {
-	pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
-	return pmd;
+	pmd_t __pmd;
+	pmd_val(__pmd) = physpage + massage_pgprot_pmd(pgprot);
+	return __pmd;
 }
 
 static inline pmd_t pmd_mkwrite(pmd_t pmd)
@@ -1336,6 +1300,34 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
 	pmd_val(pmd) &= ~_SEGMENT_ENTRY_RO;
 	return pmd;
 }
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLB_PAGE */
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+#define __HAVE_ARCH_PGTABLE_DEPOSIT
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
+
+#define __HAVE_ARCH_PGTABLE_WITHDRAW
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
+
+static inline int pmd_trans_splitting(pmd_t pmd)
+{
+	return pmd_val(pmd) & _SEGMENT_ENTRY_SPLIT;
+}
+
+static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+			      pmd_t *pmdp, pmd_t entry)
+{
+	if (!(pmd_val(entry) & _SEGMENT_ENTRY_INV) && MACHINE_HAS_EDAT1)
+		pmd_val(entry) |= _SEGMENT_ENTRY_CO;
+	*pmdp = entry;
+}
+
+static inline pmd_t pmd_mkhuge(pmd_t pmd)
+{
+	pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
+	return pmd;
+}
 
 static inline pmd_t pmd_wrprotect(pmd_t pmd)
 {
@@ -1432,13 +1424,6 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 	}
 }
 
-static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
-{
-	pmd_t __pmd;
-	pmd_val(__pmd) = physpage + massage_pgprot_pmd(pgprot);
-	return __pmd;
-}
-
 #define pfn_pmd(pfn, pgprot)	mk_pmd_phys(__pa((pfn) << PAGE_SHIFT), (pgprot))
 #define mk_pmd(page, pgprot)	pfn_pmd(page_to_pfn(page), (pgprot))
...
@@ -39,7 +39,7 @@ int arch_prepare_hugepage(struct page *page)
 	if (!ptep)
 		return -ENOMEM;
 
-	pte = mk_pte(page, PAGE_RW);
+	pte_val(pte) = addr;
 	for (i = 0; i < PTRS_PER_PTE; i++) {
 		set_pte_at(&init_mm, addr + i * PAGE_SIZE, ptep + i, pte);
 		pte_val(pte) += PAGE_SIZE;
...
@@ -42,11 +42,10 @@ pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((__aligned__(PAGE_SIZE)));
 unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
 
-static unsigned long __init setup_zero_pages(void)
+static void __init setup_zero_pages(void)
 {
 	struct cpuid cpu_id;
 	unsigned int order;
-	unsigned long size;
 	struct page *page;
 	int i;
@@ -83,14 +82,11 @@ static unsigned long __init setup_zero_pages(void)
 	page = virt_to_page((void *) empty_zero_page);
 	split_page(page, order);
 	for (i = 1 << order; i > 0; i--) {
-		SetPageReserved(page);
+		mark_page_reserved(page);
 		page++;
 	}
 
-	size = PAGE_SIZE << order;
-	zero_page_mask = (size - 1) & PAGE_MASK;
-
-	return 1UL << order;
+	zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
 }
 
 /*
@@ -147,7 +143,7 @@ void __init mem_init(void)
 
 	/* this will put all low memory onto the freelists */
 	totalram_pages += free_all_bootmem();
-	totalram_pages -= setup_zero_pages();	/* Setup zeroed pages. */
+	setup_zero_pages();	/* Setup zeroed pages. */
 
 	reservedpages = 0;
@@ -166,34 +162,15 @@ void __init mem_init(void)
 		  PFN_ALIGN((unsigned long)&_eshared) - 1);
 }
 
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr = begin;
-
-	if (begin >= end)
-		return;
-	for (; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		memset((void *)(addr & PAGE_MASK), POISON_FREE_INITMEM,
-		       PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-}
-
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			(unsigned long)&__init_begin,
-			(unsigned long)&__init_end);
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_init_pages("initrd memory", start, end);
+	free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
 }
 #endif
@@ -191,19 +191,16 @@ static void vmem_remove_range(unsigned long start, unsigned long size)
 /*
  * Add a backed mem_map array to the virtual mem_map array.
  */
-int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 {
-	unsigned long address, start_addr, end_addr;
+	unsigned long address = start;
 	pgd_t *pg_dir;
 	pud_t *pu_dir;
 	pmd_t *pm_dir;
 	pte_t *pt_dir;
 	int ret = -ENOMEM;
 
-	start_addr = (unsigned long) start;
-	end_addr = (unsigned long) (start + nr);
-
-	for (address = start_addr; address < end_addr;) {
+	for (address = start; address < end;) {
 		pg_dir = pgd_offset_k(address);
 		if (pgd_none(*pg_dir)) {
 			pu_dir = vmem_pud_alloc();
@@ -262,14 +259,14 @@ int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
 		}
 		address += PAGE_SIZE;
 	}
-	memset(start, 0, nr * sizeof(struct page));
+	memset((void *)start, 0, end - start);
 	ret = 0;
 out:
-	flush_tlb_kernel_range(start_addr, end_addr);
+	flush_tlb_kernel_range(start, end);
 	return ret;
 }
 
-void vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
...
@@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(empty_zero_page);
 
 static struct kcore_list kcore_mem, kcore_vmalloc;
 
-static unsigned long setup_zero_page(void)
+static void setup_zero_page(void)
 {
 	struct page *page;
@@ -52,9 +52,7 @@ static unsigned long setup_zero_page(void)
 		panic("Oh boy, that early out of memory?");
 
 	page = virt_to_page((void *) empty_zero_page);
-	SetPageReserved(page);
-
-	return 1UL;
+	mark_page_reserved(page);
 }
 
 #ifndef CONFIG_NEED_MULTIPLE_NODES
@@ -84,7 +82,7 @@ void __init mem_init(void)
 	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
 	totalram_pages += free_all_bootmem();
-	totalram_pages -= setup_zero_page();	/* Setup zeroed pages. */
+	setup_zero_page();	/* Setup zeroed pages. */
 	reservedpages = 0;
 
 	for (tmp = 0; tmp < max_low_pfn; tmp++)
@@ -109,37 +107,16 @@ void __init mem_init(void)
 }
 #endif /* !CONFIG_NEED_MULTIPLE_NODES */
 
-static void free_init_pages(const char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long pfn;
-
-	for (pfn = PFN_UP(begin); pfn < PFN_DOWN(end); pfn++) {
-		struct page *page = pfn_to_page(pfn);
-		void *addr = phys_to_virt(PFN_PHYS(pfn));
-
-		ClearPageReserved(page);
-		init_page_count(page);
-		memset(addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		__free_page(page);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
-}
-
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	free_init_pages("initrd memory",
-			virt_to_phys((void *) start),
-			virt_to_phys((void *) end));
+	free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
 }
 #endif
 
 void __init_refok free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
-			__pa(&__init_begin),
-			__pa(&__init_end));
+	free_initmem_default(POISON_FREE_INITMEM);
 }
 
 unsigned long pgd_current;
...
@@ -3,6 +3,7 @@
 #include <asm/cacheflush.h>
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
...
@@ -417,15 +417,13 @@ void __init mem_init(void)
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
-		unsigned long node_pages = 0;
 		void *node_high_memory;
 
 		num_physpages += pgdat->node_present_pages;
 
 		if (pgdat->node_spanned_pages)
-			node_pages = free_all_bootmem_node(pgdat);
-
-		totalram_pages += node_pages;
+			totalram_pages += free_all_bootmem_node(pgdat);
 
 		node_high_memory = (void *)__va((pgdat->node_start_pfn +
 						 pgdat->node_spanned_pages) <<
@@ -501,31 +499,13 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	unsigned long addr;
-
-	addr = (unsigned long)(&__init_begin);
-	for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk("Freeing unused kernel memory: %ldk freed\n",
-	       ((unsigned long)&__init_end -
-	        (unsigned long)&__init_begin) >> 10);
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	unsigned long p;
-
-	for (p = start; p < end; p += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(p));
-		init_page_count(virt_to_page(p));
-		free_page(p);
-		totalram_pages++;
-	}
-	printk("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
...
@@ -2,6 +2,7 @@
 #define _ASM_SPARC64_HUGETLB_H
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
...
@@ -282,14 +282,8 @@ static void map_high_region(unsigned long start_pfn, unsigned long end_pfn)
 	printk("mapping high region %08lx - %08lx\n", start_pfn, end_pfn);
 #endif
 
-	for (tmp = start_pfn; tmp < end_pfn; tmp++) {
-		struct page *page = pfn_to_page(tmp);
-
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		totalhigh_pages++;
-	}
+	for (tmp = start_pfn; tmp < end_pfn; tmp++)
+		free_highmem_page(pfn_to_page(tmp));
 }
 
 void __init mem_init(void)
@@ -347,8 +341,6 @@ void __init mem_init(void)
 		map_high_region(start_pfn, end_pfn);
 	}
 
-	totalram_pages += totalhigh_pages;
-
 	codepages = (((unsigned long) &_etext) - ((unsigned long)&_start));
 	codepages = PAGE_ALIGN(codepages) >> PAGE_SHIFT;
 	datapages = (((unsigned long) &_edata) - ((unsigned long)&_etext));
...
@@ -2181,10 +2181,9 @@ unsigned long vmemmap_table[VMEMMAP_SIZE];
 static long __meminitdata addr_start, addr_end;
 static int __meminitdata node_start;
 
-int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
+int __meminit vmemmap_populate(unsigned long vstart, unsigned long vend,
+			       int node)
 {
-	unsigned long vstart = (unsigned long) start;
-	unsigned long vend = (unsigned long) (start + nr);
 	unsigned long phys_start = (vstart - VMEMMAP_BASE);
 	unsigned long phys_end = (vend - VMEMMAP_BASE);
 	unsigned long addr = phys_start & VMEMMAP_CHUNK_MASK;
@@ -2236,7 +2235,7 @@ void __meminit vmemmap_populate_print_last(void)
 	}
 }
 
-void vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
...
@@ -16,6 +16,7 @@
 #define _ASM_TILE_HUGETLB_H
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
...
@@ -592,12 +592,7 @@ void iounmap(volatile void __iomem *addr_in)
 	   in parallel. Reuse of the virtual address is prevented by
 	   leaving it in the global lists until we're done with it.
 	   cpa takes care of the direct mappings. */
-	read_lock(&vmlist_lock);
-	for (p = vmlist; p; p = p->next) {
-		if (p->addr == addr)
-			break;
-	}
-	read_unlock(&vmlist_lock);
+	p = find_vm_area((void *)addr);
 
 	if (!p) {
 		pr_err("iounmap: bad address %p\n", addr);
...
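
These iounmap-style conversions replace open-coded walks of the global vmlist under vmlist_lock with the vmalloc API lookup declared in <linux/vmalloc.h>:

	struct vm_struct *find_vm_area(const void *addr);

It returns the vm_struct covering addr, or NULL. Removing the last external users of vmlist/vmlist_lock is what lets the list become private to mm/vmalloc.c later in the series.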
@@ -42,17 +42,12 @@ static unsigned long brk_end;
 static void setup_highmem(unsigned long highmem_start,
 			  unsigned long highmem_len)
 {
-	struct page *page;
 	unsigned long highmem_pfn;
 	int i;
 
 	highmem_pfn = __pa(highmem_start) >> PAGE_SHIFT;
-	for (i = 0; i < highmem_len >> PAGE_SHIFT; i++) {
-		page = &mem_map[highmem_pfn + i];
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-	}
+	for (i = 0; i < highmem_len >> PAGE_SHIFT; i++)
+		free_highmem_page(&mem_map[highmem_pfn + i]);
 }
 #endif
 
@@ -73,18 +68,13 @@ void __init mem_init(void)
 	totalram_pages = free_all_bootmem();
 	max_low_pfn = totalram_pages;
 #ifdef CONFIG_HIGHMEM
-	totalhigh_pages = highmem >> PAGE_SHIFT;
-	totalram_pages += totalhigh_pages;
+	setup_highmem(end_iomem, highmem);
 #endif
 	num_physpages = totalram_pages;
 	max_pfn = totalram_pages;
 	printk(KERN_INFO "Memory: %luk available\n",
 	       nr_free_pages() << (PAGE_SHIFT-10));
 	kmalloc_ok = 1;
-
-#ifdef CONFIG_HIGHMEM
-	setup_highmem(end_iomem, highmem);
-#endif
 }
 
 /*
@@ -254,15 +244,7 @@ void free_initmem(void)
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (start < end)
-		printk(KERN_INFO "Freeing initrd memory: %ldk freed\n",
-		       (end - start) >> 10);
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page(start);
-		totalram_pages++;
-	}
+	free_reserved_area(start, end, 0, "initrd");
 }
 #endif
...
@@ -66,6 +66,9 @@ void show_mem(unsigned int filter)
 	printk(KERN_DEFAULT "Mem-info:\n");
 	show_free_areas(filter);
 
+	if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
+		return;
+
 	for_each_bank(i, mi) {
 		struct membank *bank = &mi->bank[i];
 		unsigned int pfn1, pfn2;
@@ -313,24 +316,6 @@ void __init bootmem_init(void)
 	max_pfn = max_high - PHYS_PFN_OFFSET;
 }
 
-static inline int free_area(unsigned long pfn, unsigned long end, char *s)
-{
-	unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
-
-	for (; pfn < end; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
-		pages++;
-	}
-
-	if (size && s)
-		printk(KERN_INFO "Freeing %s memory: %dK\n", s, size);
-
-	return pages;
-}
-
 static inline void
 free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -404,9 +389,9 @@ void __init mem_init(void)
 	max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;
 
-	/* this will put all unused low memory onto the freelists */
 	free_unused_memmap(&meminfo);
 
+	/* this will put all unused low memory onto the freelists */
 	totalram_pages += free_all_bootmem();
 
 	reserved_pages = free_pages = 0;
@@ -491,9 +476,7 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
-				    __phys_to_pfn(__pa(__init_end)),
-				    "init");
+	free_initmem_default(0);
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
@@ -503,9 +486,7 @@ static int keep_initrd;
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
 	if (!keep_initrd)
-		totalram_pages += free_area(__phys_to_pfn(__pa(start)),
-					    __phys_to_pfn(__pa(end)),
-					    "initrd");
+		free_reserved_area(start, end, 0, "initrd");
 }
 
 static int __init keepinitrd_setup(char *__unused)
...
@@ -235,7 +235,7 @@ EXPORT_SYMBOL(__uc32_ioremap_cached);
 void __uc32_iounmap(volatile void __iomem *io_addr)
 {
 	void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
-	struct vm_struct **p, *tmp;
+	struct vm_struct *vm;
 
 	/*
 	 * If this is a section based mapping we need to handle it
@@ -244,17 +244,10 @@ void __uc32_iounmap(volatile void __iomem *io_addr)
 	 * all the mappings before the area can be reclaimed
 	 * by someone else.
 	 */
-	write_lock(&vmlist_lock);
-	for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
-		if ((tmp->flags & VM_IOREMAP) && (tmp->addr == addr)) {
-			if (tmp->flags & VM_UNICORE_SECTION_MAPPING) {
-				unmap_area_sections((unsigned long)tmp->addr,
-						    tmp->size);
-			}
-			break;
-		}
-	}
-	write_unlock(&vmlist_lock);
+	vm = find_vm_area(addr);
+	if (vm && (vm->flags & VM_IOREMAP) &&
+	    (vm->flags & VM_UNICORE_SECTION_MAPPING))
+		unmap_area_sections((unsigned long)vm->addr, vm->size);
 
 	vunmap(addr);
 }
...
@@ -2,6 +2,7 @@
 #define _ASM_X86_HUGETLB_H
 
 #include <asm/page.h>
+#include <asm-generic/hugetlb.h>
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
...
@@ -43,10 +43,10 @@ obj-$(CONFIG_MTRR)			+= mtrr/
 obj-$(CONFIG_X86_LOCAL_APIC)		+= perfctr-watchdog.o perf_event_amd_ibs.o
 
 quiet_cmd_mkcapflags = MKCAP   $@
-      cmd_mkcapflags = $(PERL) $(srctree)/$(src)/mkcapflags.pl $< $@
+      cmd_mkcapflags = $(CONFIG_SHELL) $(srctree)/$(src)/mkcapflags.sh $< $@
 
 cpufeature = $(src)/../../include/asm/cpufeature.h
 
 targets += capflags.c
 
-$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.pl FORCE
+$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.sh FORCE
 	$(call if_changed,mkcapflags)

The Perl generator is deleted outright:

-#!/usr/bin/perl -w
-#
-# Generate the x86_cap_flags[] array from include/asm-x86/cpufeature.h
-#
-
-($in, $out) = @ARGV;
-
-open(IN, "< $in\0") or die "$0: cannot open: $in: $!\n";
-open(OUT, "> $out\0") or die "$0: cannot create: $out: $!\n";
-
-print OUT "#ifndef _ASM_X86_CPUFEATURE_H\n";
-print OUT "#include <asm/cpufeature.h>\n";
-print OUT "#endif\n";
-print OUT "\n";
-print OUT "const char * const x86_cap_flags[NCAPINTS*32] = {\n";
-
-%features = ();
-$err = 0;
-
-while (defined($line = <IN>)) {
-	if ($line =~ /^\s*\#\s*define\s+(X86_FEATURE_(\S+))\s+(.*)$/) {
-		$macro = $1;
-		$feature = "\L$2";
-		$tail = $3;
-		if ($tail =~ /\/\*\s*\"([^"]*)\".*\*\//) {
-			$feature = "\L$1";
-		}
-
-		next if ($feature eq '');
-
-		if ($features{$feature}++) {
-			print STDERR "$in: duplicate feature name: $feature\n";
-			$err++;
-		}
-		printf OUT "\t%-32s = \"%s\",\n", "[$macro]", $feature;
-	}
-}
-print OUT "};\n";
-
-close(IN);
-close(OUT);
-
-if ($err) {
-	unlink($out);
-	exit(1);
-}
-exit(0);

and replaced by an equivalent POSIX shell script:

+#!/bin/sh
+#
+# Generate the x86_cap_flags[] array from include/asm/cpufeature.h
+#
+
+IN=$1
+OUT=$2
+
+TABS="$(printf '\t\t\t\t\t')"
+trap 'rm "$OUT"' EXIT
+
+(
+	echo "#ifndef _ASM_X86_CPUFEATURE_H"
+	echo "#include <asm/cpufeature.h>"
+	echo "#endif"
+	echo ""
+	echo "const char * const x86_cap_flags[NCAPINTS*32] = {"
+
+	# Iterate through any input lines starting with #define X86_FEATURE_
+	sed -n -e 's/\t/ /g' -e 's/^ *# *define *X86_FEATURE_//p' $IN |
+	while read i
+	do
+		# Name is everything up to the first whitespace
+		NAME="$(echo "$i" | sed 's/ .*//')"
+
+		# If the /* comment */ starts with a quote string, grab that.
+		VALUE="$(echo "$i" | sed -n 's@.*/\* *\("[^"]*"\).*\*/@\1@p')"
+		[ -z "$VALUE" ] && VALUE="\"$NAME\""
+		[ "$VALUE" == '""' ] && continue
+
+		# Name is uppercase, VALUE is all lowercase
+		VALUE="$(echo "$VALUE" | tr A-Z a-z)"
+
+		TABCOUNT=$(( ( 5*8 - 14 - $(echo "$NAME" | wc -c) ) / 8 ))
+		printf "\t[%s]%.*s = %s,\n" \
+			"X86_FEATURE_$NAME" "$TABCOUNT" "$TABS" "$VALUE"
+	done
+	echo "};"
+) > $OUT
+
+trap - EXIT
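
Either generator emits the same capflags.c shape; a hypothetical excerpt of the output (the feature entries shown are illustrative, derived from the script's printf format):

	#ifndef _ASM_X86_CPUFEATURE_H
	#include <asm/cpufeature.h>
	#endif

	const char * const x86_cap_flags[NCAPINTS*32] = {
		[X86_FEATURE_FPU]		 = "fpu",
		[X86_FEATURE_VME]		 = "vme",
	};

Moving from Perl to shell drops a build-time dependency; the sed/printf pipeline reproduces the same #define-scraping the Perl regex did.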
@@ -137,5 +137,4 @@ void __init set_highmem_pages_init(void)
 		add_highpages_with_active_regions(nid, zone_start_pfn,
 						  zone_end_pfn);
 	}
-	totalram_pages += totalhigh_pages;
 }
@@ -515,11 +515,8 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
 	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
 		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
+		free_reserved_page(virt_to_page(addr));
 	}
 #endif
 }
...
@@ -427,14 +427,6 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
 	pkmap_page_table = pte;
 }
 
-static void __init add_one_highpage_init(struct page *page)
-{
-	ClearPageReserved(page);
-	init_page_count(page);
-	__free_page(page);
-	totalhigh_pages++;
-}
-
 void __init add_highpages_with_active_regions(int nid,
 		unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -448,7 +440,7 @@ void __init add_highpages_with_active_regions(int nid,
 				start_pfn, end_pfn);
 	for ( ; pfn < e_pfn; pfn++)
 		if (pfn_valid(pfn))
-			add_one_highpage_init(pfn_to_page(pfn));
+			free_highmem_page(pfn_to_page(pfn));
 }
 #else
...
@@ -1011,11 +1011,8 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct)
 	flush_tlb_all();
 }
 
-void __ref vmemmap_free(struct page *memmap, unsigned long nr_pages)
+void __ref vmemmap_free(unsigned long start, unsigned long end)
 {
-	unsigned long start = (unsigned long)memmap;
-	unsigned long end = (unsigned long)(memmap + nr_pages);
-
 	remove_pagetable(start, end, false);
 }
@@ -1067,10 +1064,9 @@ void __init mem_init(void)
 
 	/* clear_bss() already clear the empty_zero_page */
 
-	reservedpages = 0;
-
-	/* this will put all low memory onto the freelists */
 	register_page_bootmem_info();
+
+	/* this will put all memory onto the freelists */
 	totalram_pages = free_all_bootmem();
 
 	absent_pages = absent_pages_in_range(0, max_pfn);
@@ -1285,18 +1281,17 @@ static long __meminitdata addr_start, addr_end;
 static void __meminitdata *p_start, *p_end;
 static int __meminitdata node_start;
 
-int __meminit
-vmemmap_populate(struct page *start_page, unsigned long size, int node)
+static int __meminit vmemmap_populate_hugepages(unsigned long start,
+						unsigned long end, int node)
 {
-	unsigned long addr = (unsigned long)start_page;
-	unsigned long end = (unsigned long)(start_page + size);
+	unsigned long addr;
 	unsigned long next;
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
 
-	for (; addr < end; addr = next) {
-		void *p = NULL;
+	for (addr = start; addr < end; addr = next) {
+		next = pmd_addr_end(addr, end);
 
 		pgd = vmemmap_pgd_populate(addr, node);
 		if (!pgd)
@@ -1306,31 +1301,14 @@ vmemmap_populate(struct page *start_page, unsigned long size, int node)
 		if (!pud)
 			return -ENOMEM;
 
-		if (!cpu_has_pse) {
-			next = (addr + PAGE_SIZE) & PAGE_MASK;
-			pmd = vmemmap_pmd_populate(pud, addr, node);
-
-			if (!pmd)
-				return -ENOMEM;
-
-			p = vmemmap_pte_populate(pmd, addr, node);
-
-			if (!p)
-				return -ENOMEM;
-
-			addr_end = addr + PAGE_SIZE;
-			p_end = p + PAGE_SIZE;
-		} else {
-			next = pmd_addr_end(addr, end);
-
-			pmd = pmd_offset(pud, addr);
-			if (pmd_none(*pmd)) {
+		pmd = pmd_offset(pud, addr);
+		if (pmd_none(*pmd)) {
+			void *p;
+
+			p = vmemmap_alloc_block_buf(PMD_SIZE, node);
+			if (p) {
 				pte_t entry;
 
-				p = vmemmap_alloc_block_buf(PMD_SIZE, node);
-				if (!p)
-					return -ENOMEM;
-
 				entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
 						PAGE_KERNEL_LARGE);
 				set_pmd(pmd, __pmd(pte_val(entry)));
@@ -1347,15 +1325,32 @@ vmemmap_populate(struct page *start_page, unsigned long size, int node)
 				addr_end = addr + PMD_SIZE;
 				p_end = p + PMD_SIZE;
-			} else
-				vmemmap_verify((pte_t *)pmd, node, addr, next);
+				continue;
+			}
+		} else if (pmd_large(*pmd)) {
+			vmemmap_verify((pte_t *)pmd, node, addr, next);
+			continue;
 		}
-
+		pr_warn_once("vmemmap: falling back to regular page backing\n");
+		if (vmemmap_populate_basepages(addr, next, node))
+			return -ENOMEM;
 	}
-	sync_global_pgds((unsigned long)start_page, end - 1);
 	return 0;
 }
+
+int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
+{
+	int err;
+
+	if (cpu_has_pse)
+		err = vmemmap_populate_hugepages(start, end, node);
+	else
+		err = vmemmap_populate_basepages(start, end, node);
+	if (!err)
+		sync_global_pgds(start, end - 1);
+	return err;
+}
+
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long size)
...
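
vmemmap_populate_basepages() is the common 4K-page path in mm/sparse-vmemmap.c that both the !PSE case and the hugepage-allocation-failure fallback above now share; a sketch assumed to mirror the per-arch loop it replaces:

	int __meminit vmemmap_populate_basepages(unsigned long start,
						 unsigned long end, int node)
	{
		unsigned long addr;
		pgd_t *pgd;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		for (addr = start; addr < end; addr += PAGE_SIZE) {
			pgd = vmemmap_pgd_populate(addr, node);
			if (!pgd)
				return -ENOMEM;
			pud = vmemmap_pud_populate(pgd, addr, node);
			if (!pud)
				return -ENOMEM;
			pmd = vmemmap_pmd_populate(pud, addr, node);
			if (!pmd)
				return -ENOMEM;
			pte = vmemmap_pte_populate(pmd, addr, node);
			if (!pte)
				return -ENOMEM;
			vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
		}
		return 0;
	}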
@@ -282,12 +282,7 @@ void iounmap(volatile void __iomem *addr)
 	   in parallel. Reuse of the virtual address is prevented by
 	   leaving it in the global lists until we're done with it.
 	   cpa takes care of the direct mappings. */
-	read_lock(&vmlist_lock);
-	for (p = vmlist; p; p = p->next) {
-		if (p->addr == (void __force *)addr)
-			break;
-	}
-	read_unlock(&vmlist_lock);
+	p = find_vm_area((void __force *)addr);
 
 	if (!p) {
 		printk(KERN_ERR "iounmap: bad address %p\n", addr);
...
@@ -114,14 +114,11 @@ void numa_clear_node(int cpu)
  */
 void __init setup_node_to_cpumask_map(void)
 {
-	unsigned int node, num = 0;
+	unsigned int node;
 
 	/* setup nr_node_ids if not done yet */
-	if (nr_node_ids == MAX_NUMNODES) {
-		for_each_node_mask(node, node_possible_map)
-			num = node;
-		nr_node_ids = num + 1;
-	}
+	if (nr_node_ids == MAX_NUMNODES)
+		setup_nr_node_ids();
 
 	/* allocate the map */
 	for (node = 0; node < nr_node_ids; node++)
...
@@ -208,32 +208,17 @@ void __init mem_init(void)
 	       highmemsize >> 10);
 }
 
-void
-free_reserved_mem(void *start, void *end)
-{
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		init_page_count(virt_to_page(start));
-		free_page((unsigned long)start);
-		totalram_pages++;
-	}
-}
-
 #ifdef CONFIG_BLK_DEV_INITRD
 extern int initrd_is_mapped;
 
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (initrd_is_mapped) {
-		free_reserved_mem((void*)start, (void*)end);
-		printk ("Freeing initrd memory: %ldk freed\n",(end-start)>>10);
-	}
+	if (initrd_is_mapped)
+		free_reserved_area(start, end, 0, "initrd");
 }
 #endif
 
 void free_initmem(void)
 {
-	free_reserved_mem(__init_begin, __init_end);
-	printk("Freeing unused kernel memory: %zuk freed\n",
-	       (__init_end - __init_begin) >> 10);
+	free_initmem_default(0);
 }
@@ -25,6 +25,15 @@ EXPORT_SYMBOL_GPL(cpu_subsys);
 static DEFINE_PER_CPU(struct device *, cpu_sys_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
+static void change_cpu_under_node(struct cpu *cpu,
+			unsigned int from_nid, unsigned int to_nid)
+{
+	int cpuid = cpu->dev.id;
+	unregister_cpu_under_node(cpuid, from_nid);
+	register_cpu_under_node(cpuid, to_nid);
+	cpu->node_id = to_nid;
+}
+
 static ssize_t show_online(struct device *dev,
 			   struct device_attribute *attr,
 			   char *buf)
@@ -39,17 +48,29 @@ static ssize_t __ref store_online(struct device *dev,
 				  const char *buf, size_t count)
 {
 	struct cpu *cpu = container_of(dev, struct cpu, dev);
+	int cpuid = cpu->dev.id;
+	int from_nid, to_nid;
 	ssize_t ret;
 
 	cpu_hotplug_driver_lock();
 	switch (buf[0]) {
 	case '0':
-		ret = cpu_down(cpu->dev.id);
+		ret = cpu_down(cpuid);
 		if (!ret)
 			kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
 		break;
 	case '1':
-		ret = cpu_up(cpu->dev.id);
+		from_nid = cpu_to_node(cpuid);
+		ret = cpu_up(cpuid);
+
+		/*
+		 * When hot adding memory to memoryless node and enabling a cpu
+		 * on the node, node number of the cpu may internally change.
+		 */
+		to_nid = cpu_to_node(cpuid);
+		if (from_nid != to_nid)
+			change_cpu_under_node(cpu, from_nid, to_nid);
 		if (!ret)
 			kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 		break;
...
@@ -93,16 +93,6 @@ int register_memory(struct memory_block *memory)
 	return error;
 }
 
-static void
-unregister_memory(struct memory_block *memory)
-{
-	BUG_ON(memory->dev.bus != &memory_subsys);
-
-	/* drop the ref. we got in remove_memory_block() */
-	kobject_put(&memory->dev.kobj);
-	device_unregister(&memory->dev);
-}
-
 unsigned long __weak memory_block_size_bytes(void)
 {
 	return MIN_MEMORY_BLOCK_SIZE;
@@ -217,8 +207,7 @@ int memory_isolate_notify(unsigned long val, void *v)
  * The probe routines leave the pages reserved, just as the bootmem code does.
  * Make sure they're still that way.
  */
-static bool pages_correctly_reserved(unsigned long start_pfn,
-					unsigned long nr_pages)
+static bool pages_correctly_reserved(unsigned long start_pfn)
 {
 	int i, j;
 	struct page *page;
@@ -266,7 +255,7 @@ memory_block_action(unsigned long phys_index, unsigned long action, int online_t
 	switch (action) {
 	case MEM_ONLINE:
-		if (!pages_correctly_reserved(start_pfn, nr_pages))
+		if (!pages_correctly_reserved(start_pfn))
 			return -EBUSY;
 
 		ret = online_pages(start_pfn, nr_pages, online_type);
@@ -637,8 +626,28 @@ static int add_memory_section(int nid, struct mem_section *section,
 	return ret;
 }
 
-int remove_memory_block(unsigned long node_id, struct mem_section *section,
-		int phys_device)
+/*
+ * need an interface for the VM to add new memory regions,
+ * but without onlining it.
+ */
+int register_new_memory(int nid, struct mem_section *section)
+{
+	return add_memory_section(nid, section, NULL, MEM_OFFLINE, HOTPLUG);
+}
+
+#ifdef CONFIG_MEMORY_HOTREMOVE
+static void
+unregister_memory(struct memory_block *memory)
+{
+	BUG_ON(memory->dev.bus != &memory_subsys);
+
+	/* drop the ref. we got in remove_memory_block() */
+	kobject_put(&memory->dev.kobj);
+	device_unregister(&memory->dev);
+}
+
+static int remove_memory_block(unsigned long node_id,
+			       struct mem_section *section, int phys_device)
 {
 	struct memory_block *mem;
@@ -661,15 +670,6 @@ int remove_memory_block(unsigned long node_id, struct mem_section *section,
 	return 0;
 }
 
-/*
- * need an interface for the VM to add new memory regions,
- * but without onlining it.
- */
-int register_new_memory(int nid, struct mem_section *section)
-{
-	return add_memory_section(nid, section, NULL, MEM_OFFLINE, HOTPLUG);
-}
-
 int unregister_memory_section(struct mem_section *section)
 {
 	if (!present_section(section))
@@ -677,6 +677,7 @@ int unregister_memory_section(struct mem_section *section)
 
 	return remove_memory_block(0, section, 0);
 }
+#endif /* CONFIG_MEMORY_HOTREMOVE */
 
 /*
  * offline one memory block. If the memory block has been offlined, do nothing.
...
@@ -7,6 +7,7 @@
 #include <linux/mm.h>
 #include <linux/memory.h>
 #include <linux/vmstat.h>
+#include <linux/notifier.h>
 #include <linux/node.h>
 #include <linux/hugetlb.h>
 #include <linux/compaction.h>
@@ -683,8 +684,11 @@ static int __init register_node_type(void)
 
 	ret = subsys_system_register(&node_subsys, cpu_root_attr_groups);
 	if (!ret) {
-		hotplug_memory_notifier(node_memory_callback,
-					NODE_CALLBACK_PRI);
+		static struct notifier_block node_memory_callback_nb = {
+			.notifier_call = node_memory_callback,
+			.priority = NODE_CALLBACK_PRI,
+		};
+		register_hotmemory_notifier(&node_memory_callback_nb);
 	}
 
 	/*
...
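
register_hotmemory_notifier() takes a caller-defined notifier_block instead of synthesizing one inside a macro, which is what allows the block-scoped static above. A sketch of the assumed <linux/memory.h> side:

	#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
	#define register_hotmemory_notifier(nb)		register_memory_notifier(nb)
	#else
	#define register_hotmemory_notifier(nb)		({ (void)(nb); 0; })
	#endif

so callers compile away cleanly when memory hotplug is not configured, without #ifdef at every call site.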
@@ -114,12 +114,9 @@ static void __meminit release_firmware_map_entry(struct kobject *kobj)
 		 * map_entries_bootmem here, and deleted from &map_entries in
 		 * firmware_map_remove_entry().
 		 */
-		if (firmware_map_find_entry(entry->start, entry->end,
-		    entry->type)) {
-			spin_lock(&map_entries_bootmem_lock);
-			list_add(&entry->list, &map_entries_bootmem);
-			spin_unlock(&map_entries_bootmem_lock);
-		}
+		spin_lock(&map_entries_bootmem_lock);
+		list_add(&entry->list, &map_entries_bootmem);
+		spin_unlock(&map_entries_bootmem_lock);
 
 		return;
 	}
...
@@ -2440,6 +2440,15 @@ config FB_PUV3_UNIGFX
 	  Choose this option if you want to use the Unigfx device as a
 	  framebuffer device. Without the support of PCI & AGP.
 
+config FB_HYPERV
+	tristate "Microsoft Hyper-V Synthetic Video support"
+	depends on FB && HYPERV
+	select FB_CFB_FILLRECT
+	select FB_CFB_COPYAREA
+	select FB_CFB_IMAGEBLIT
+	help
+	  This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
+
 source "drivers/video/omap/Kconfig"
 source "drivers/video/omap2/Kconfig"
 source "drivers/video/exynos/Kconfig"
...
@@ -149,6 +149,7 @@ obj-$(CONFIG_FB_MSM)              += msm/
 obj-$(CONFIG_FB_NUC900)           += nuc900fb.o
 obj-$(CONFIG_FB_JZ4740)		  += jz4740_fb.o
 obj-$(CONFIG_FB_PUV3_UNIGFX)      += fb-puv3.o
+obj-$(CONFIG_FB_HYPERV)		  += hyperv_fb.o
 
 # Platform or fallback drivers go here
 obj-$(CONFIG_FB_UVESA)            += uvesafb.o
...
@@ -27,7 +27,7 @@ static void cw_update_attr(u8 *dst, u8 *src, int attribute,
 {
 	int i, j, offset = (vc->vc_font.height < 10) ? 1 : 2;
 	int width = (vc->vc_font.height + 7) >> 3;
-	u8 c, t = 0, msk = ~(0xff >> offset);
+	u8 c, msk = ~(0xff >> offset);
 
 	for (i = 0; i < vc->vc_font.width; i++) {
 		for (j = 0; j < width; j++) {
@@ -40,7 +40,6 @@ static void cw_update_attr(u8 *dst, u8 *src, int attribute,
 				c = ~c;
 			src++;
 			*dst++ = c;
-			t = c;
 		}
 	}
 }
...
@@ -419,7 +419,7 @@ static struct fb_ops ep93xxfb_ops = {
 	.fb_mmap	= ep93xxfb_mmap,
 };
 
-static int __init ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
+static int ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
 {
 	int i, fb_size = 0;
@@ -441,7 +441,7 @@ static int __init ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
 	return fb_size;
 }
 
-static int __init ep93xxfb_alloc_videomem(struct fb_info *info)
+static int ep93xxfb_alloc_videomem(struct fb_info *info)
 {
 	struct ep93xx_fbi *fbi = info->par;
 	char __iomem *virt_addr;
@@ -627,19 +627,7 @@ static struct platform_driver ep93xxfb_driver = {
 		.owner	= THIS_MODULE,
 	},
 };
-
-static int ep93xxfb_init(void)
-{
-	return platform_driver_register(&ep93xxfb_driver);
-}
-
-static void __exit ep93xxfb_exit(void)
-{
-	platform_driver_unregister(&ep93xxfb_driver);
-}
-
-module_init(ep93xxfb_init);
-module_exit(ep93xxfb_exit);
+module_platform_driver(ep93xxfb_driver);
 
 MODULE_DESCRIPTION("EP93XX Framebuffer Driver");
 MODULE_ALIAS("platform:ep93xx-fb");
...
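
module_platform_driver() generates exactly the boilerplate being deleted; from <linux/platform_device.h>:

	#define module_platform_driver(__platform_driver) \
		module_driver(__platform_driver, platform_driver_register, \
				platform_driver_unregister)

i.e. it expands to a module_init() that registers the driver and a module_exit() that unregisters it, so the hand-written init/exit pair above is redundant.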
@@ -32,6 +32,7 @@
#include <linux/notifier.h>
#include <linux/regulator/consumer.h>
#include <linux/pm_runtime.h>
+#include <linux/err.h>
#include <video/exynos_mipi_dsim.h>
@@ -382,10 +383,9 @@ static int exynos_mipi_dsi_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-dsim->reg_base = devm_request_and_ioremap(&pdev->dev, res);
-if (!dsim->reg_base) {
-	dev_err(&pdev->dev, "failed to remap io region\n");
-	ret = -ENOMEM;
+dsim->reg_base = devm_ioremap_resource(&pdev->dev, res);
+if (IS_ERR(dsim->reg_base)) {
+	ret = PTR_ERR(dsim->reg_base);
goto error;
}
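
devm_ioremap_resource() reports failure with an ERR_PTR()-encoded error code rather than NULL, so the caller switches to IS_ERR()/PTR_ERR() instead of hard-coding -ENOMEM; the helper also logs its own diagnostics, which is why the dev_err() goes away. The new <linux/err.h> include supplies IS_ERR() and PTR_ERR().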
......
This diff is collapsed.
@@ -137,8 +137,20 @@ static int* get_ctrl_ptr(struct maven_data* md, int idx) {
static int maven_get_reg(struct i2c_client* c, char reg) {
char dst;
-struct i2c_msg msgs[] = {{ c->addr, I2C_M_REV_DIR_ADDR, sizeof(reg), &reg },
-	{ c->addr, I2C_M_RD | I2C_M_NOSTART, sizeof(dst), &dst }};
+struct i2c_msg msgs[] = {
+	{
+		.addr = c->addr,
+		.flags = I2C_M_REV_DIR_ADDR,
+		.len = sizeof(reg),
+		.buf = &reg
+	},
+	{
+		.addr = c->addr,
+		.flags = I2C_M_RD | I2C_M_NOSTART,
+		.len = sizeof(dst),
+		.buf = &dst
+	}
+};
s32 err;
err = i2c_transfer(c->adapter, msgs, 2);
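
More than cosmetic churn: designated initializers bind each value to a field by name, so the messages stay correct even if struct i2c_msg gains or reorders members, and each flag is visibly tied to its field. A self-contained illustration; the struct below is a stand-in for demonstration, not the kernel's i2c_msg:

#include <stdio.h>

struct msg { unsigned short addr; unsigned short flags; unsigned short len; char *buf; };

int main(void)
{
	char byte = 0x42;
	/* positional: silently breaks if the field order ever changes */
	struct msg a = { 0x1b, 0, sizeof(byte), &byte };
	/* designated: each value is attached to a named field */
	struct msg b = { .addr = 0x1b, .flags = 0, .len = sizeof(byte), .buf = &byte };
	printf("%d %d\n", a.addr, b.addr);
	return 0;
}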
......
@@ -960,57 +960,8 @@ struct lcd_regs {
#define gra_partdisp_ctrl_ver(id) ((id) ? (((id) & 1) ? \
LCD_TVG_CUTVLN : PN2_LCD_GRA_CUTVLN) : LCD_GRA_CUTVLN)
/*
* defined Video Memory Color format for DMA control 0 register
* DMA0 bit[23:20]
*/
#define VMODE_RGB565 0x0
#define VMODE_RGB1555 0x1
#define VMODE_RGB888PACKED 0x2
#define VMODE_RGB888UNPACKED 0x3
#define VMODE_RGBA888 0x4
#define VMODE_YUV422PACKED 0x5
#define VMODE_YUV422PLANAR 0x6
#define VMODE_YUV420PLANAR 0x7
#define VMODE_SMPNCMD 0x8
#define VMODE_PALETTE4BIT 0x9
#define VMODE_PALETTE8BIT 0xa
#define VMODE_RESERVED 0xb
/*
* defined Graphic Memory Color format for DMA control 0 register
* DMA0 bit[19:16]
*/
#define GMODE_RGB565 0x0
#define GMODE_RGB1555 0x1
#define GMODE_RGB888PACKED 0x2
#define GMODE_RGB888UNPACKED 0x3
#define GMODE_RGBA888 0x4
#define GMODE_YUV422PACKED 0x5
#define GMODE_YUV422PLANAR 0x6
#define GMODE_YUV420PLANAR 0x7
#define GMODE_SMPNCMD 0x8
#define GMODE_PALETTE4BIT 0x9
#define GMODE_PALETTE8BIT 0xa
#define GMODE_RESERVED 0xb
/*
* define for DMA control 1 register
*/
#define DMA1_FRAME_TRIG 31 /* bit location */
#define DMA1_VSYNC_MODE 28
#define DMA1_VSYNC_INV 27
#define DMA1_CKEY 24
#define DMA1_CARRY 23
#define DMA1_LNBUF_ENA 22
#define DMA1_GATED_ENA 21
#define DMA1_PWRDN_ENA 20
#define DMA1_DSCALE 18
#define DMA1_ALPHA_MODE 16
#define DMA1_ALPHA 08
#define DMA1_PXLCMD 00
/*
* defined for Configure Dumb Mode
* DUMB LCD Panel bit[31:28]
*/
@@ -1050,18 +1001,6 @@ struct lcd_regs {
#define CFG_CYC_BURST_LEN16 (1<<4)
#define CFG_CYC_BURST_LEN8 (0<<4)
/*
* defined Dumb Panel Clock Divider register
* SCLK_Source bit[31]
*/
/* 0: PLL clock select*/
#define AXI_BUS_SEL 0x80000000
#define CCD_CLK_SEL 0x40000000
#define DCON_CLK_SEL 0x20000000
#define ENA_CLK_INT_DIV CONFIG_FB_DOVE_CLCD_SCLK_DIV
#define IDLE_CLK_INT_DIV 0x1 /* idle Integer Divider */
#define DIS_CLK_INT_DIV 0x0 /* Disable Integer Divider */
/* SRAM ID */
#define SRAMID_GAMMA_YR 0x0
#define SRAMID_GAMMA_UG 0x1
@@ -1471,422 +1410,6 @@ struct dsi_regs {
#define LVDS_FREQ_OFFSET_MODE_CK_DIV4_OUT (0x1 << 1)
#define LVDS_FREQ_OFFSET_MODE_EN (0x1 << 0)
/* VDMA */
struct vdma_ch_regs {
#define VDMA_DC_SADDR_1 0x320
#define VDMA_DC_SADDR_2 0x3A0
#define VDMA_DC_SZ_1 0x324
#define VDMA_DC_SZ_2 0x3A4
#define VDMA_CTRL_1 0x328
#define VDMA_CTRL_2 0x3A8
#define VDMA_SRC_SZ_1 0x32C
#define VDMA_SRC_SZ_2 0x3AC
#define VDMA_SA_1 0x330
#define VDMA_SA_2 0x3B0
#define VDMA_DA_1 0x334
#define VDMA_DA_2 0x3B4
#define VDMA_SZ_1 0x338
#define VDMA_SZ_2 0x3B8
u32 dc_saddr;
u32 dc_size;
u32 ctrl;
u32 src_size;
u32 src_addr;
u32 dst_addr;
u32 dst_size;
#define VDMA_PITCH_1 0x33C
#define VDMA_PITCH_2 0x3BC
#define VDMA_ROT_CTRL_1 0x340
#define VDMA_ROT_CTRL_2 0x3C0
#define VDMA_RAM_CTRL0_1 0x344
#define VDMA_RAM_CTRL0_2 0x3C4
#define VDMA_RAM_CTRL1_1 0x348
#define VDMA_RAM_CTRL1_2 0x3C8
u32 pitch;
u32 rot_ctrl;
u32 ram_ctrl0;
u32 ram_ctrl1;
};
struct vdma_regs {
#define VDMA_ARBR_CTRL 0x300
#define VDMA_IRQR 0x304
#define VDMA_IRQM 0x308
#define VDMA_IRQS 0x30C
#define VDMA_MDMA_ARBR_CTRL 0x310
u32 arbr_ctr;
u32 irq_raw;
u32 irq_mask;
u32 irq_status;
u32 mdma_arbr_ctrl;
u32 reserved[3];
struct vdma_ch_regs ch1;
u32 reserved2[21];
struct vdma_ch_regs ch2;
};
/* CMU */
#define CMU_PIP_DE_H_CFG 0x0008
#define CMU_PRI1_H_CFG 0x000C
#define CMU_PRI2_H_CFG 0x0010
#define CMU_ACE_MAIN_DE1_H_CFG 0x0014
#define CMU_ACE_MAIN_DE2_H_CFG 0x0018
#define CMU_ACE_PIP_DE1_H_CFG 0x001C
#define CMU_ACE_PIP_DE2_H_CFG 0x0020
#define CMU_PIP_DE_V_CFG 0x0024
#define CMU_PRI_V_CFG 0x0028
#define CMU_ACE_MAIN_DE_V_CFG 0x002C
#define CMU_ACE_PIP_DE_V_CFG 0x0030
#define CMU_BAR_0_CFG 0x0034
#define CMU_BAR_1_CFG 0x0038
#define CMU_BAR_2_CFG 0x003C
#define CMU_BAR_3_CFG 0x0040
#define CMU_BAR_4_CFG 0x0044
#define CMU_BAR_5_CFG 0x0048
#define CMU_BAR_6_CFG 0x004C
#define CMU_BAR_7_CFG 0x0050
#define CMU_BAR_8_CFG 0x0054
#define CMU_BAR_9_CFG 0x0058
#define CMU_BAR_10_CFG 0x005C
#define CMU_BAR_11_CFG 0x0060
#define CMU_BAR_12_CFG 0x0064
#define CMU_BAR_13_CFG 0x0068
#define CMU_BAR_14_CFG 0x006C
#define CMU_BAR_15_CFG 0x0070
#define CMU_BAR_CTRL 0x0074
#define PATTERN_TOTAL 0x0078
#define PATTERN_ACTIVE 0x007C
#define PATTERN_FRONT_PORCH 0x0080
#define PATTERN_BACK_PORCH 0x0084
#define CMU_CLK_CTRL 0x0088
#define CMU_ICSC_M_C0_L 0x0900
#define CMU_ICSC_M_C0_H 0x0901
#define CMU_ICSC_M_C1_L 0x0902
#define CMU_ICSC_M_C1_H 0x0903
#define CMU_ICSC_M_C2_L 0x0904
#define CMU_ICSC_M_C2_H 0x0905
#define CMU_ICSC_M_C3_L 0x0906
#define CMU_ICSC_M_C3_H 0x0907
#define CMU_ICSC_M_C4_L 0x0908
#define CMU_ICSC_M_C4_H 0x0909
#define CMU_ICSC_M_C5_L 0x090A
#define CMU_ICSC_M_C5_H 0x090B
#define CMU_ICSC_M_C6_L 0x090C
#define CMU_ICSC_M_C6_H 0x090D
#define CMU_ICSC_M_C7_L 0x090E
#define CMU_ICSC_M_C7_H 0x090F
#define CMU_ICSC_M_C8_L 0x0910
#define CMU_ICSC_M_C8_H 0x0911
#define CMU_ICSC_M_O1_0 0x0914
#define CMU_ICSC_M_O1_1 0x0915
#define CMU_ICSC_M_O1_2 0x0916
#define CMU_ICSC_M_O2_0 0x0918
#define CMU_ICSC_M_O2_1 0x0919
#define CMU_ICSC_M_O2_2 0x091A
#define CMU_ICSC_M_O3_0 0x091C
#define CMU_ICSC_M_O3_1 0x091D
#define CMU_ICSC_M_O3_2 0x091E
#define CMU_ICSC_P_C0_L 0x0920
#define CMU_ICSC_P_C0_H 0x0921
#define CMU_ICSC_P_C1_L 0x0922
#define CMU_ICSC_P_C1_H 0x0923
#define CMU_ICSC_P_C2_L 0x0924
#define CMU_ICSC_P_C2_H 0x0925
#define CMU_ICSC_P_C3_L 0x0926
#define CMU_ICSC_P_C3_H 0x0927
#define CMU_ICSC_P_C4_L 0x0928
#define CMU_ICSC_P_C4_H 0x0929
#define CMU_ICSC_P_C5_L 0x092A
#define CMU_ICSC_P_C5_H 0x092B
#define CMU_ICSC_P_C6_L 0x092C
#define CMU_ICSC_P_C6_H 0x092D
#define CMU_ICSC_P_C7_L 0x092E
#define CMU_ICSC_P_C7_H 0x092F
#define CMU_ICSC_P_C8_L 0x0930
#define CMU_ICSC_P_C8_H 0x0931
#define CMU_ICSC_P_O1_0 0x0934
#define CMU_ICSC_P_O1_1 0x0935
#define CMU_ICSC_P_O1_2 0x0936
#define CMU_ICSC_P_O2_0 0x0938
#define CMU_ICSC_P_O2_1 0x0939
#define CMU_ICSC_P_O2_2 0x093A
#define CMU_ICSC_P_O3_0 0x093C
#define CMU_ICSC_P_O3_1 0x093D
#define CMU_ICSC_P_O3_2 0x093E
#define CMU_BR_M_EN 0x0940
#define CMU_BR_M_TH1_L 0x0942
#define CMU_BR_M_TH1_H 0x0943
#define CMU_BR_M_TH2_L 0x0944
#define CMU_BR_M_TH2_H 0x0945
#define CMU_ACE_M_EN 0x0950
#define CMU_ACE_M_WFG1 0x0951
#define CMU_ACE_M_WFG2 0x0952
#define CMU_ACE_M_WFG3 0x0953
#define CMU_ACE_M_TH0 0x0954
#define CMU_ACE_M_TH1 0x0955
#define CMU_ACE_M_TH2 0x0956
#define CMU_ACE_M_TH3 0x0957
#define CMU_ACE_M_TH4 0x0958
#define CMU_ACE_M_TH5 0x0959
#define CMU_ACE_M_OP0_L 0x095A
#define CMU_ACE_M_OP0_H 0x095B
#define CMU_ACE_M_OP5_L 0x095C
#define CMU_ACE_M_OP5_H 0x095D
#define CMU_ACE_M_GB2 0x095E
#define CMU_ACE_M_GB3 0x095F
#define CMU_ACE_M_MS1 0x0960
#define CMU_ACE_M_MS2 0x0961
#define CMU_ACE_M_MS3 0x0962
#define CMU_BR_P_EN 0x0970
#define CMU_BR_P_TH1_L 0x0972
#define CMU_BR_P_TH1_H 0x0973
#define CMU_BR_P_TH2_L 0x0974
#define CMU_BR_P_TH2_H 0x0975
#define CMU_ACE_P_EN 0x0980
#define CMU_ACE_P_WFG1 0x0981
#define CMU_ACE_P_WFG2 0x0982
#define CMU_ACE_P_WFG3 0x0983
#define CMU_ACE_P_TH0 0x0984
#define CMU_ACE_P_TH1 0x0985
#define CMU_ACE_P_TH2 0x0986
#define CMU_ACE_P_TH3 0x0987
#define CMU_ACE_P_TH4 0x0988
#define CMU_ACE_P_TH5 0x0989
#define CMU_ACE_P_OP0_L 0x098A
#define CMU_ACE_P_OP0_H 0x098B
#define CMU_ACE_P_OP5_L 0x098C
#define CMU_ACE_P_OP5_H 0x098D
#define CMU_ACE_P_GB2 0x098E
#define CMU_ACE_P_GB3 0x098F
#define CMU_ACE_P_MS1 0x0990
#define CMU_ACE_P_MS2 0x0991
#define CMU_ACE_P_MS3 0x0992
#define CMU_FTDC_M_EN 0x09A0
#define CMU_FTDC_P_EN 0x09A1
#define CMU_FTDC_INLOW_L 0x09A2
#define CMU_FTDC_INLOW_H 0x09A3
#define CMU_FTDC_INHIGH_L 0x09A4
#define CMU_FTDC_INHIGH_H 0x09A5
#define CMU_FTDC_OUTLOW_L 0x09A6
#define CMU_FTDC_OUTLOW_H 0x09A7
#define CMU_FTDC_OUTHIGH_L 0x09A8
#define CMU_FTDC_OUTHIGH_H 0x09A9
#define CMU_FTDC_YLOW 0x09AA
#define CMU_FTDC_YHIGH 0x09AB
#define CMU_FTDC_CH1 0x09AC
#define CMU_FTDC_CH2_L 0x09AE
#define CMU_FTDC_CH2_H 0x09AF
#define CMU_FTDC_CH3_L 0x09B0
#define CMU_FTDC_CH3_H 0x09B1
#define CMU_FTDC_1_C00_6 0x09B2
#define CMU_FTDC_1_C01_6 0x09B8
#define CMU_FTDC_1_C11_6 0x09BE
#define CMU_FTDC_1_C10_6 0x09C4
#define CMU_FTDC_1_OFF00_6 0x09CA
#define CMU_FTDC_1_OFF10_6 0x09D0
#define CMU_HS_M_EN 0x0A00
#define CMU_HS_M_AX1_L 0x0A02
#define CMU_HS_M_AX1_H 0x0A03
#define CMU_HS_M_AX2_L 0x0A04
#define CMU_HS_M_AX2_H 0x0A05
#define CMU_HS_M_AX3_L 0x0A06
#define CMU_HS_M_AX3_H 0x0A07
#define CMU_HS_M_AX4_L 0x0A08
#define CMU_HS_M_AX4_H 0x0A09
#define CMU_HS_M_AX5_L 0x0A0A
#define CMU_HS_M_AX5_H 0x0A0B
#define CMU_HS_M_AX6_L 0x0A0C
#define CMU_HS_M_AX6_H 0x0A0D
#define CMU_HS_M_AX7_L 0x0A0E
#define CMU_HS_M_AX7_H 0x0A0F
#define CMU_HS_M_AX8_L 0x0A10
#define CMU_HS_M_AX8_H 0x0A11
#define CMU_HS_M_AX9_L 0x0A12
#define CMU_HS_M_AX9_H 0x0A13
#define CMU_HS_M_AX10_L 0x0A14
#define CMU_HS_M_AX10_H 0x0A15
#define CMU_HS_M_AX11_L 0x0A16
#define CMU_HS_M_AX11_H 0x0A17
#define CMU_HS_M_AX12_L 0x0A18
#define CMU_HS_M_AX12_H 0x0A19
#define CMU_HS_M_AX13_L 0x0A1A
#define CMU_HS_M_AX13_H 0x0A1B
#define CMU_HS_M_AX14_L 0x0A1C
#define CMU_HS_M_AX14_H 0x0A1D
#define CMU_HS_M_H1_H14 0x0A1E
#define CMU_HS_M_S1_S14 0x0A2C
#define CMU_HS_M_GL 0x0A3A
#define CMU_HS_M_MAXSAT_RGB_Y_L 0x0A3C
#define CMU_HS_M_MAXSAT_RGB_Y_H 0x0A3D
#define CMU_HS_M_MAXSAT_RCR_L 0x0A3E
#define CMU_HS_M_MAXSAT_RCR_H 0x0A3F
#define CMU_HS_M_MAXSAT_RCB_L 0x0A40
#define CMU_HS_M_MAXSAT_RCB_H 0x0A41
#define CMU_HS_M_MAXSAT_GCR_L 0x0A42
#define CMU_HS_M_MAXSAT_GCR_H 0x0A43
#define CMU_HS_M_MAXSAT_GCB_L 0x0A44
#define CMU_HS_M_MAXSAT_GCB_H 0x0A45
#define CMU_HS_M_MAXSAT_BCR_L 0x0A46
#define CMU_HS_M_MAXSAT_BCR_H 0x0A47
#define CMU_HS_M_MAXSAT_BCB_L 0x0A48
#define CMU_HS_M_MAXSAT_BCB_H 0x0A49
#define CMU_HS_M_ROFF_L 0x0A4A
#define CMU_HS_M_ROFF_H 0x0A4B
#define CMU_HS_M_GOFF_L 0x0A4C
#define CMU_HS_M_GOFF_H 0x0A4D
#define CMU_HS_M_BOFF_L 0x0A4E
#define CMU_HS_M_BOFF_H 0x0A4F
#define CMU_HS_P_EN 0x0A50
#define CMU_HS_P_AX1_L 0x0A52
#define CMU_HS_P_AX1_H 0x0A53
#define CMU_HS_P_AX2_L 0x0A54
#define CMU_HS_P_AX2_H 0x0A55
#define CMU_HS_P_AX3_L 0x0A56
#define CMU_HS_P_AX3_H 0x0A57
#define CMU_HS_P_AX4_L 0x0A58
#define CMU_HS_P_AX4_H 0x0A59
#define CMU_HS_P_AX5_L 0x0A5A
#define CMU_HS_P_AX5_H 0x0A5B
#define CMU_HS_P_AX6_L 0x0A5C
#define CMU_HS_P_AX6_H 0x0A5D
#define CMU_HS_P_AX7_L 0x0A5E
#define CMU_HS_P_AX7_H 0x0A5F
#define CMU_HS_P_AX8_L 0x0A60
#define CMU_HS_P_AX8_H 0x0A61
#define CMU_HS_P_AX9_L 0x0A62
#define CMU_HS_P_AX9_H 0x0A63
#define CMU_HS_P_AX10_L 0x0A64
#define CMU_HS_P_AX10_H 0x0A65
#define CMU_HS_P_AX11_L 0x0A66
#define CMU_HS_P_AX11_H 0x0A67
#define CMU_HS_P_AX12_L 0x0A68
#define CMU_HS_P_AX12_H 0x0A69
#define CMU_HS_P_AX13_L 0x0A6A
#define CMU_HS_P_AX13_H 0x0A6B
#define CMU_HS_P_AX14_L 0x0A6C
#define CMU_HS_P_AX14_H 0x0A6D
#define CMU_HS_P_H1_H14 0x0A6E
#define CMU_HS_P_S1_S14 0x0A7C
#define CMU_HS_P_GL 0x0A8A
#define CMU_HS_P_MAXSAT_RGB_Y_L 0x0A8C
#define CMU_HS_P_MAXSAT_RGB_Y_H 0x0A8D
#define CMU_HS_P_MAXSAT_RCR_L 0x0A8E
#define CMU_HS_P_MAXSAT_RCR_H 0x0A8F
#define CMU_HS_P_MAXSAT_RCB_L 0x0A90
#define CMU_HS_P_MAXSAT_RCB_H 0x0A91
#define CMU_HS_P_MAXSAT_GCR_L 0x0A92
#define CMU_HS_P_MAXSAT_GCR_H 0x0A93
#define CMU_HS_P_MAXSAT_GCB_L 0x0A94
#define CMU_HS_P_MAXSAT_GCB_H 0x0A95
#define CMU_HS_P_MAXSAT_BCR_L 0x0A96
#define CMU_HS_P_MAXSAT_BCR_H 0x0A97
#define CMU_HS_P_MAXSAT_BCB_L 0x0A98
#define CMU_HS_P_MAXSAT_BCB_H 0x0A99
#define CMU_HS_P_ROFF_L 0x0A9A
#define CMU_HS_P_ROFF_H 0x0A9B
#define CMU_HS_P_GOFF_L 0x0A9C
#define CMU_HS_P_GOFF_H 0x0A9D
#define CMU_HS_P_BOFF_L 0x0A9E
#define CMU_HS_P_BOFF_H 0x0A9F
#define CMU_GLCSC_M_C0_L 0x0AA0
#define CMU_GLCSC_M_C0_H 0x0AA1
#define CMU_GLCSC_M_C1_L 0x0AA2
#define CMU_GLCSC_M_C1_H 0x0AA3
#define CMU_GLCSC_M_C2_L 0x0AA4
#define CMU_GLCSC_M_C2_H 0x0AA5
#define CMU_GLCSC_M_C3_L 0x0AA6
#define CMU_GLCSC_M_C3_H 0x0AA7
#define CMU_GLCSC_M_C4_L 0x0AA8
#define CMU_GLCSC_M_C4_H 0x0AA9
#define CMU_GLCSC_M_C5_L 0x0AAA
#define CMU_GLCSC_M_C5_H 0x0AAB
#define CMU_GLCSC_M_C6_L 0x0AAC
#define CMU_GLCSC_M_C6_H 0x0AAD
#define CMU_GLCSC_M_C7_L 0x0AAE
#define CMU_GLCSC_M_C7_H 0x0AAF
#define CMU_GLCSC_M_C8_L 0x0AB0
#define CMU_GLCSC_M_C8_H 0x0AB1
#define CMU_GLCSC_M_O1_1 0x0AB4
#define CMU_GLCSC_M_O1_2 0x0AB5
#define CMU_GLCSC_M_O1_3 0x0AB6
#define CMU_GLCSC_M_O2_1 0x0AB8
#define CMU_GLCSC_M_O2_2 0x0AB9
#define CMU_GLCSC_M_O2_3 0x0ABA
#define CMU_GLCSC_M_O3_1 0x0ABC
#define CMU_GLCSC_M_O3_2 0x0ABD
#define CMU_GLCSC_M_O3_3 0x0ABE
#define CMU_GLCSC_P_C0_L 0x0AC0
#define CMU_GLCSC_P_C0_H 0x0AC1
#define CMU_GLCSC_P_C1_L 0x0AC2
#define CMU_GLCSC_P_C1_H 0x0AC3
#define CMU_GLCSC_P_C2_L 0x0AC4
#define CMU_GLCSC_P_C2_H 0x0AC5
#define CMU_GLCSC_P_C3_L 0x0AC6
#define CMU_GLCSC_P_C3_H 0x0AC7
#define CMU_GLCSC_P_C4_L 0x0AC8
#define CMU_GLCSC_P_C4_H 0x0AC9
#define CMU_GLCSC_P_C5_L 0x0ACA
#define CMU_GLCSC_P_C5_H 0x0ACB
#define CMU_GLCSC_P_C6_L 0x0ACC
#define CMU_GLCSC_P_C6_H 0x0ACD
#define CMU_GLCSC_P_C7_L 0x0ACE
#define CMU_GLCSC_P_C7_H 0x0ACF
#define CMU_GLCSC_P_C8_L 0x0AD0
#define CMU_GLCSC_P_C8_H 0x0AD1
#define CMU_GLCSC_P_O1_1 0x0AD4
#define CMU_GLCSC_P_O1_2 0x0AD5
#define CMU_GLCSC_P_O1_3 0x0AD6
#define CMU_GLCSC_P_O2_1 0x0AD8
#define CMU_GLCSC_P_O2_2 0x0AD9
#define CMU_GLCSC_P_O2_3 0x0ADA
#define CMU_GLCSC_P_O3_1 0x0ADC
#define CMU_GLCSC_P_O3_2 0x0ADD
#define CMU_GLCSC_P_O3_3 0x0ADE
#define CMU_PIXVAL_M_EN 0x0AE0
#define CMU_PIXVAL_P_EN 0x0AE1
#define CMU_CLK_CTRL_TCLK 0x0
#define CMU_CLK_CTRL_SCLK 0x2
#define CMU_CLK_CTRL_MSK 0x2
#define CMU_CLK_CTRL_ENABLE 0x1
#define LCD_TOP_CTRL_TV 0x2
#define LCD_TOP_CTRL_PN 0x0
#define LCD_TOP_CTRL_SEL_MSK 0x2
#define LCD_IO_CMU_IN_SEL_MSK (0x3 << 20)
#define LCD_IO_CMU_IN_SEL_TV 0
#define LCD_IO_CMU_IN_SEL_PN 1
#define LCD_IO_CMU_IN_SEL_PN2 2
#define LCD_IO_TV_OUT_SEL_MSK (0x3 << 26)
#define LCD_IO_PN_OUT_SEL_MSK (0x3 << 24)
#define LCD_IO_PN2_OUT_SEL_MSK (0x3 << 28)
#define LCD_IO_TV_OUT_SEL_NON 3
#define LCD_IO_PN_OUT_SEL_NON 3
#define LCD_IO_PN2_OUT_SEL_NON 3
#define LCD_TOP_CTRL_CMU_ENABLE 0x1
#define LCD_IO_OVERL_MSK 0xC00000
#define LCD_IO_OVERL_TV 0x0
#define LCD_IO_OVERL_LCD1 0x400000
#define LCD_IO_OVERL_LCD2 0xC00000
#define HINVERT_MSK 0x4
#define VINVERT_MSK 0x8
#define HINVERT_LEN 0x2
#define VINVERT_LEN 0x3
#define CMU_CTRL 0x88
#define CMU_CTRL_A0_MSK 0x6
#define CMU_CTRL_A0_TV 0x0
#define CMU_CTRL_A0_LCD1 0x1
#define CMU_CTRL_A0_LCD2 0x2
#define CMU_CTRL_A0_HDMI 0x3
#define ICR_DRV_ROUTE_OFF 0x0
#define ICR_DRV_ROUTE_TV 0x1
#define ICR_DRV_ROUTE_LCD1 0x2
#define ICR_DRV_ROUTE_LCD2 0x3
enum {
PATH_PN = 0,
PATH_TV,
......
@@ -10,7 +10,7 @@ obj-y := open.o read_write.o file_table.o super.o \
ioctl.o readdir.o select.o fifo.o dcache.o inode.o \
attr.o bad_inode.o file.o filesystems.o namespace.o \
seq_file.o xattr.o libfs.o fs-writeback.o \
-pnode.o drop_caches.o splice.o sync.o utimes.o \
+pnode.o splice.o sync.o utimes.o \
stack.o fs_struct.o statfs.o
ifeq ($(CONFIG_BLOCK),y)
@@ -49,6 +49,7 @@ obj-$(CONFIG_FS_POSIX_ACL) += posix_acl.o xattr_acl.o
obj-$(CONFIG_NFS_COMMON) += nfs_common/
obj-$(CONFIG_GENERIC_ACL) += generic_acl.o
obj-$(CONFIG_COREDUMP) += coredump.o
+obj-$(CONFIG_SYSCTL) += drop_caches.o
obj-$(CONFIG_FHANDLE) += fhandle.o
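
drop_caches.o's only entry point is the vm.drop_caches sysctl handler, so it is now built only when CONFIG_SYSCTL is enabled instead of unconditionally.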
......
@@ -865,8 +865,6 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
/* Link the buffer to its page */
set_bh_page(bh, page, offset);
-init_buffer(bh, NULL, NULL);
}
return head;
/*
@@ -2949,7 +2947,7 @@ static void guard_bh_eod(int rw, struct bio *bio, struct buffer_head *bh)
}
}
-int submit_bh(int rw, struct buffer_head * bh)
+int _submit_bh(int rw, struct buffer_head *bh, unsigned long bio_flags)
{
struct bio *bio;
int ret = 0;
@@ -2984,6 +2982,7 @@ int submit_bh(int rw, struct buffer_head * bh)
bio->bi_end_io = end_bio_bh_io_sync;
bio->bi_private = bh;
+bio->bi_flags |= bio_flags;
/* Take care of bh's that straddle the end of the device */
guard_bh_eod(rw, bio, bh);
@@ -2997,6 +2996,12 @@ int submit_bh(int rw, struct buffer_head * bh)
bio_put(bio);
return ret;
}
+EXPORT_SYMBOL_GPL(_submit_bh);
+int submit_bh(int rw, struct buffer_head *bh)
+{
+	return _submit_bh(rw, bh, 0);
+}
EXPORT_SYMBOL(submit_bh);
/**
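
The buffer.c change is an API-extension pattern: grow the worker a flags argument, keep the old exported name as a zero-flags wrapper so every existing submit_bh() caller compiles and behaves unchanged, and let the new caller (jbd, below) pass 1 << BIO_SNAP_STABLE. A self-contained userspace sketch of the same pattern; the names here are illustrative stand-ins, not kernel APIs:

#include <stdio.h>

enum { SNAP_STABLE_BIT = 0 };  /* stand-in for the kernel's BIO_SNAP_STABLE bit */

/* extended worker: same behaviour as before, plus caller-supplied flags */
static int _submit(int rw, const char *data, unsigned long flags)
{
	printf("rw=%d flags=%#lx data=%s\n", rw, flags, data);
	return 0;
}

/* old entry point preserved as a thin zero-flags wrapper */
static int submit(int rw, const char *data)
{
	return _submit(rw, data, 0);
}

int main(void)
{
	submit(1, "ordinary buffer write");                                /* legacy callers */
	_submit(1, "journal write needing stable pages", 1UL << SNAP_STABLE_BIT);
	return 0;
}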
......
@@ -672,12 +672,6 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
if (sdio->final_block_in_bio != sdio->cur_page_block ||
cur_offset != bio_next_offset)
dio_bio_submit(dio, sdio);
-/*
- * Submit now if the underlying fs is about to perform a
- * metadata read
- */
-else if (sdio->boundary)
-	dio_bio_submit(dio, sdio);
}
if (sdio->bio == NULL) {
@@ -737,16 +731,6 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
sdio->cur_page_block +
(sdio->cur_page_len >> sdio->blkbits) == blocknr) {
sdio->cur_page_len += len;
-/*
- * If sdio->boundary then we want to schedule the IO now to
- * avoid metadata seeks.
- */
-if (sdio->boundary) {
-	ret = dio_send_cur_page(dio, sdio, map_bh);
-	page_cache_release(sdio->cur_page);
-	sdio->cur_page = NULL;
-}
goto out;
}
@@ -758,7 +742,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
page_cache_release(sdio->cur_page);
sdio->cur_page = NULL;
if (ret)
-goto out;
+return ret;
}
page_cache_get(page); /* It is in dio */
@@ -768,6 +752,16 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
sdio->cur_page_block = blocknr;
sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits;
out:
+/*
+ * If sdio->boundary then we want to schedule the IO now to
+ * avoid metadata seeks.
+ */
+if (sdio->boundary) {
+	ret = dio_send_cur_page(dio, sdio, map_bh);
+	dio_bio_submit(dio, sdio);
+	page_cache_release(sdio->cur_page);
+	sdio->cur_page = NULL;
+}
return ret;
}
@@ -969,7 +963,8 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
this_chunk_bytes = this_chunk_blocks << blkbits;
BUG_ON(this_chunk_bytes == 0);
-sdio->boundary = buffer_boundary(map_bh);
+if (this_chunk_blocks == sdio->blocks_available)
+	sdio->boundary = buffer_boundary(map_bh);
ret = submit_page_section(dio, sdio, page,
offset_in_page,
this_chunk_bytes,
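
Net effect of the direct-io changes: the two scattered boundary special cases are funnelled into submit_page_section()'s single out: label, where the pending page is flushed with dio_send_cur_page() and the bio then submitted immediately, and do_direct_IO() now flags a boundary only once the whole mapped chunk has been consumed (this_chunk_blocks == sdio->blocks_available).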
......
@@ -613,7 +613,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
* when the old and new regions overlap clear from new_end.
*/
free_pgd_range(&tlb, new_end, old_end, new_end,
-	vma->vm_next ? vma->vm_next->vm_start : 0);
+	vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
} else {
/*
* otherwise, clean from old_start; this is done to not touch
@@ -622,7 +622,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
* for the others its just a little faster.
*/
free_pgd_range(&tlb, old_start, old_end, new_end,
-	vma->vm_next ? vma->vm_next->vm_start : 0);
+	vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
}
tlb_finish_mmu(&tlb, new_end, old_end);
......
@@ -2067,7 +2067,6 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
-sb->s_flags |= MS_SNAP_STABLE;
return 0;
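
This removes ext3's blanket demand for stable page writes on every mount; the jbd hunks below replace it with targeted per-bio bouncing (BIO_SNAP_STABLE), so only writes whose pages can genuinely change in flight pay the bounce-buffer cost.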
......
@@ -287,5 +287,5 @@ const struct file_operations fscache_stats_fops = {
.open = fscache_stats_open,
.read = seq_read,
.llseek = seq_lseek,
-.release = seq_release,
+.release = single_release,
};
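
fscache_stats_open() is a single_open() user, so releasing with plain seq_release() leaked the seq_file private state; single_release() is the matching teardown.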
@@ -162,8 +162,17 @@ static void journal_do_submit_data(struct buffer_head **wbuf, int bufs,
for (i = 0; i < bufs; i++) {
wbuf[i]->b_end_io = end_buffer_write_sync;
-/* We use-up our safety reference in submit_bh() */
-submit_bh(write_op, wbuf[i]);
+/*
+ * Here we write back pagecache data that may be mmaped. Since
+ * we cannot afford to clean the page and set PageWriteback
+ * here due to lock ordering (page lock ranks above transaction
+ * start), the data can change while IO is in flight. Tell the
+ * block layer it should bounce the bio pages if stable data
+ * during write is required.
+ *
+ * We use up our safety reference in submit_bh().
+ */
+_submit_bh(write_op, wbuf[i], 1 << BIO_SNAP_STABLE);
}
}
@@ -667,7 +676,17 @@ void journal_commit_transaction(journal_t *journal)
clear_buffer_dirty(bh);
set_buffer_uptodate(bh);
bh->b_end_io = journal_end_buffer_io_sync;
-submit_bh(write_op, bh);
+/*
+ * In data=journal mode, here we can end up
+ * writing pagecache data that might be
+ * mmapped. Since we can't afford to clean the
+ * page and set PageWriteback (see the comment
+ * near the other use of _submit_bh()), the
+ * data can change while the write is in
+ * flight. Tell the block layer to bounce the
+ * bio pages if stable pages are required.
+ */
+_submit_bh(write_op, bh, 1 << BIO_SNAP_STABLE);
}
cond_resched();
......
@@ -310,8 +310,6 @@ int journal_write_metadata_buffer(transaction_t *transaction,
new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
/* keep subsequent assertions sane */
-new_bh->b_state = 0;
-init_buffer(new_bh, NULL, NULL);
atomic_set(&new_bh->b_count, 1);
new_jh = journal_add_journal_head(new_bh); /* This sleeps */
......
@@ -367,8 +367,6 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
}
/* keep subsequent assertions sane */
-new_bh->b_state = 0;
-init_buffer(new_bh, NULL, NULL);
atomic_set(&new_bh->b_count, 1);
new_jh = jbd2_journal_add_journal_head(new_bh); /* This sleeps */
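
In both journal layers (and in alloc_page_buffers() further up) the re-initialisation was redundant: alloc_buffer_head() already returns a zeroed buffer_head, so resetting b_state and calling init_buffer() again did no additional work.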
......
@@ -1498,10 +1498,8 @@ int dlm_mig_lockres_handler(struct o2net_msg *msg, u32 len, void *data,
dlm_put(dlm);
if (ret < 0) {
-if (buf)
-	kfree(buf);
-if (item)
-	kfree(item);
+kfree(buf);
+kfree(item);
mlog_errno(ret);
}
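
kfree(NULL) is defined to be a no-op, so the NULL guards were dead weight.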
......
@@ -101,13 +101,6 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if (!S_ISDIR(inode->i_mode))
flags &= ~OCFS2_DIRSYNC_FL;
-handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
-if (IS_ERR(handle)) {
-	status = PTR_ERR(handle);
-	mlog_errno(status);
-	goto bail_unlock;
-}
oldflags = ocfs2_inode->ip_attr;
flags = flags & mask;
flags |= oldflags & ~mask;
@@ -120,7 +113,14 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) &
(OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) {
if (!capable(CAP_LINUX_IMMUTABLE))
-	goto bail_commit;
+	goto bail_unlock;
+}
+handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
+if (IS_ERR(handle)) {
+	status = PTR_ERR(handle);
+	mlog_errno(status);
+	goto bail_unlock;
}
ocfs2_inode->ip_attr = flags;
@@ -130,8 +130,8 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if (status < 0)
mlog_errno(status);
-bail_commit:
ocfs2_commit_trans(osb, handle);
bail_unlock:
ocfs2_inode_unlock(inode, 1);
bail:
@@ -706,8 +706,10 @@ int ocfs2_info_handle_freefrag(struct inode *inode,
o2info_set_request_filled(&oiff->iff_req);
-if (o2info_to_user(*oiff, req))
+if (o2info_to_user(*oiff, req)) {
+	status = -EFAULT;
goto bail;
+}
status = 0;
bail:
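
Two fixes in one file: ocfs2_set_inode_attr() now starts the journal transaction only after the CAP_LINUX_IMMUTABLE check has passed, so a permission failure no longer opens (and then commits) an empty transaction, and ocfs2_info_handle_freefrag() now actually reports -EFAULT when the copy back to userspace fails.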
......
@@ -471,7 +471,7 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
int ret, goal_bit = 0;
struct buffer_head *gd_bh = NULL;
-struct ocfs2_group_desc *bg = NULL;
+struct ocfs2_group_desc *bg;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
int c_to_b = 1 << (osb->s_clustersize_bits -
inode->i_sb->s_blocksize_bits);
@@ -481,13 +481,6 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
*/
range->me_goal = ocfs2_block_to_cluster_start(inode->i_sb,
range->me_goal);
-/*
- * moving goal is not allowd to start with a group desc blok(#0 blk)
- * let's compromise to the latter cluster.
- */
-if (range->me_goal == le64_to_cpu(bg->bg_blkno))
-	range->me_goal += c_to_b;
/*
* validate goal sits within global_bitmap, and return the victim
* group desc
@@ -501,6 +494,13 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
bg = (struct ocfs2_group_desc *)gd_bh->b_data;
+/*
+ * moving goal is not allowd to start with a group desc blok(#0 blk)
+ * let's compromise to the latter cluster.
+ */
+if (range->me_goal == le64_to_cpu(bg->bg_blkno))
+	range->me_goal += c_to_b;
/*
* movement is not gonna cross two groups.
*/
@@ -1057,42 +1057,40 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
struct inode *inode = file_inode(filp);
struct ocfs2_move_extents range;
-struct ocfs2_move_extents_context *context = NULL;
+struct ocfs2_move_extents_context *context;
+if (!argp)
+	return -EINVAL;
status = mnt_want_write_file(filp);
if (status)
return status;
if ((!S_ISREG(inode->i_mode)) || !(filp->f_mode & FMODE_WRITE))
-goto out;
+goto out_drop;
if (inode->i_flags & (S_IMMUTABLE|S_APPEND)) {
status = -EPERM;
-goto out;
+goto out_drop;
}
context = kzalloc(sizeof(struct ocfs2_move_extents_context), GFP_NOFS);
if (!context) {
status = -ENOMEM;
mlog_errno(status);
-goto out;
+goto out_drop;
}
context->inode = inode;
context->file = filp;
-if (argp) {
-	if (copy_from_user(&range, argp, sizeof(range))) {
-		status = -EFAULT;
-		goto out;
-	}
-} else {
-	status = -EINVAL;
-	goto out;
+if (copy_from_user(&range, argp, sizeof(range))) {
+	status = -EFAULT;
+	goto out_free;
}
if (range.me_start > i_size_read(inode))
-goto out;
+goto out_free;
if (range.me_start + range.me_len > i_size_read(inode))
range.me_len = i_size_read(inode) - range.me_start;
@@ -1124,25 +1122,24 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
status = ocfs2_validate_and_adjust_move_goal(inode, &range);
if (status)
-goto out;
+goto out_copy;
}
status = ocfs2_move_extents(context);
if (status)
mlog_errno(status);
-out:
+out_copy:
/*
* movement/defragmentation may end up being partially completed,
* that's the reason why we need to return userspace the finished
* length and new_offset even if failure happens somewhere.
*/
-if (argp) {
-	if (copy_to_user(argp, &range, sizeof(range)))
-		status = -EFAULT;
-}
+if (copy_to_user(argp, &range, sizeof(range)))
+	status = -EFAULT;
+out_free:
kfree(context);
+out_drop:
mnt_drop_write_file(filp);
return status;
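
The rewritten error paths unwind in reverse order of setup: out_copy still reports partial progress to userspace, out_free releases the context, and out_drop pairs with mnt_want_write_file(). Early failures therefore no longer copy back an uninitialised range, and the !argp case is rejected up front before any state is taken.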
......
@@ -5,7 +5,7 @@
obj-y += proc.o
proc-y := nommu.o task_nommu.o
-proc-$(CONFIG_MMU) := mmu.o task_mmu.o
+proc-$(CONFIG_MMU) := task_mmu.o
proc-y += inode.o root.o base.o generic.o array.o \
fd.o
......
@@ -30,24 +30,6 @@ extern int proc_net_init(void);
static inline int proc_net_init(void) { return 0; }
#endif
-struct vmalloc_info {
-	unsigned long used;
-	unsigned long largest_chunk;
-};
-#ifdef CONFIG_MMU
-#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
-extern void get_vmalloc_info(struct vmalloc_info *vmi);
-#else
-#define VMALLOC_TOTAL 0UL
-#define get_vmalloc_info(vmi) \
-do { \
-	(vmi)->used = 0; \
-	(vmi)->largest_chunk = 0; \
-} while(0)
-#endif
extern int proc_tid_stat(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task);
extern int proc_tgid_stat(struct seq_file *m, struct pid_namespace *ns,
......
@@ -15,6 +15,7 @@
#include <linux/capability.h>
#include <linux/elf.h>
#include <linux/elfcore.h>
+#include <linux/notifier.h>
#include <linux/vmalloc.h>
#include <linux/highmem.h>
#include <linux/printk.h>
@@ -564,7 +565,6 @@ static const struct file_operations proc_kcore_operations = {
.llseek = default_llseek,
};
-#ifdef CONFIG_MEMORY_HOTPLUG
/* just remember that we have to update kcore */
static int __meminit kcore_callback(struct notifier_block *self,
unsigned long action, void *arg)
@@ -578,8 +578,11 @@ static int __meminit kcore_callback(struct notifier_block *self,
}
return NOTIFY_OK;
}
-#endif
+static struct notifier_block kcore_callback_nb __meminitdata = {
+	.notifier_call = kcore_callback,
+	.priority = 0,
+};
static struct kcore_list kcore_vmalloc;
@@ -631,7 +634,7 @@ static int __init proc_kcore_init(void)
add_modules_range();
/* Store direct-map area from physical memory map */
kcore_update_ram();
-hotplug_memory_notifier(kcore_callback, 0);
+register_hotmemory_notifier(&kcore_callback_nb);
return 0;
}
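
register_hotmemory_notifier() registers the given notifier_block only when memory hotplug support is compiled in and degrades to a no-op otherwise, so the callback no longer needs its own #ifdef CONFIG_MEMORY_HOTPLUG guards; the __meminit/__meminitdata annotations let the unused code and data be discarded in that case.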
......
@@ -11,6 +11,7 @@
#include <linux/swap.h>
#include <linux/vmstat.h>
#include <linux/atomic.h>
+#include <linux/vmalloc.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include "internal.h"
......
/* mmu.c: mmu memory info files
*
* Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/highmem.h>
#include <asm/pgtable.h>
#include "internal.h"
void get_vmalloc_info(struct vmalloc_info *vmi)
{
struct vm_struct *vma;
unsigned long free_area_size;
unsigned long prev_end;
vmi->used = 0;
if (!vmlist) {
vmi->largest_chunk = VMALLOC_TOTAL;
}
else {
vmi->largest_chunk = 0;
prev_end = VMALLOC_START;
read_lock(&vmlist_lock);
for (vma = vmlist; vma; vma = vma->next) {
unsigned long addr = (unsigned long) vma->addr;
/*
* Some archs keep another range for modules in vmlist
*/
if (addr < VMALLOC_START)
continue;
if (addr >= VMALLOC_END)
break;
vmi->used += vma->size;
free_area_size = addr - prev_end;
if (vmi->largest_chunk < free_area_size)
vmi->largest_chunk = free_area_size;
prev_end = vma->size + addr;
}
if (VMALLOC_END - prev_end > vmi->largest_chunk)
vmi->largest_chunk = VMALLOC_END - prev_end;
read_unlock(&vmlist_lock);
}
}
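
This whole file (fs/proc's get_vmalloc_info()) is deleted as part of moving the vmalloc statistics out of fs/proc; note the Makefile, internal.h and meminfo.c hunks above, with <linux/vmalloc.h> now carrying the declarations. The algorithm itself is a single ordered walk over vmlist that sums the used space and tracks the largest free gap between neighbouring regions. A minimal userspace sketch of the same scan, with the bounds and ranges invented purely for illustration:

#include <stdio.h>

struct range { unsigned long start, end; };  /* [start, end), sorted by start */

int main(void)
{
	const unsigned long VSTART = 0x1000, VEND = 0x10000;
	const struct range used[] = { { 0x1800, 0x2000 }, { 0x4000, 0x9000 } };
	unsigned long total = 0, largest = 0, prev_end = VSTART;

	for (unsigned long i = 0; i < sizeof(used) / sizeof(used[0]); i++) {
		total += used[i].end - used[i].start;
		if (used[i].start - prev_end > largest)
			largest = used[i].start - prev_end;  /* free gap before this range */
		prev_end = used[i].end;
	}
	if (VEND - prev_end > largest)
		largest = VEND - prev_end;                   /* trailing free gap */

	printf("used=%#lx largest_chunk=%#lx\n", total, largest);
	return 0;
}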
@@ -128,7 +128,7 @@ EXPORT_SYMBOL(generic_file_llseek_size);
*
* This is a generic implemenation of ->llseek useable for all normal local
* filesystems. It just updates the file offset to the value specified by
-* @offset and @whence under i_mutex.
+* @offset and @whence.
*/
loff_t generic_file_llseek(struct file *file, loff_t offset, int whence)
{
......
#ifndef _ASM_GENERIC_HUGETLB_H
#define _ASM_GENERIC_HUGETLB_H
static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)
{
return mk_pte(page, pgprot);
}
static inline int huge_pte_write(pte_t pte)
{
return pte_write(pte);
}
static inline int huge_pte_dirty(pte_t pte)
{
return pte_dirty(pte);
}
static inline pte_t huge_pte_mkwrite(pte_t pte)
{
return pte_mkwrite(pte);
}
static inline pte_t huge_pte_mkdirty(pte_t pte)
{
return pte_mkdirty(pte);
}
static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
{
return pte_modify(pte, newprot);
}
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
pte_clear(mm, addr, ptep);
}
#endif /* _ASM_GENERIC_HUGETLB_H */
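
These wrappers simply forward each huge_pte_* operation to the architecture's ordinary pte helpers; the point of the new asm-generic/hugetlb.h is that architectures whose huge PTEs behave like regular PTEs can pull it in from their asm/hugetlb.h instead of re-defining every helper by hand.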
This diff is collapsed. (The remaining 66 file diffs in this commit are folded by the viewer.)