1. 24 Jun 2013: 4 commits
  2. 17 Jun 2013: 3 commits
  3. 06 Jun 2013: 2 commits
    • arch, mm: Remove tlb_fast_mode() · 29eb7782
      Committed by Peter Zijlstra
      Since the introduction of preemptible mmu_gather TLB fast mode has been
      broken. TLB fast mode relies on there being absolutely no concurrency;
      it frees pages first and invalidates TLBs later.
      
      However now we can get concurrency and stuff goes *bang*.
      
      This patch removes all tlb_fast_mode() code; it was found to be the
      better option versus trying to patch the hole by entangling TLB
      invalidation with the scheduler.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Reported-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      29eb7782
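      For illustration, a minimal sketch of the ordering the generic mmu_gather
      code must enforce once fast mode is gone: pages are only batched, the TLB
      is invalidated first, and only then are the batched pages freed. The
      struct and helper names below are simplified assumptions, not the
      kernel's actual definitions.

        struct page;                        /* opaque here */
        void tlb_invalidate_range(void);    /* assumed arch flush hook */
        void free_page_now(struct page *p); /* assumed page-allocator hook */

        #define BATCH_MAX 64

        struct mmu_gather_sketch {
                struct page *pages[BATCH_MAX];
                unsigned int nr;
        };

        /* Queue a page; never free it immediately (that was "fast mode"). */
        static void tlb_remove_page_sketch(struct mmu_gather_sketch *tlb,
                                           struct page *p)
        {
                tlb->pages[tlb->nr++] = p;
                if (tlb->nr == BATCH_MAX) {
                        /* Flush first, so no CPU still holds a stale TLB
                         * entry pointing at a page we are about to free. */
                        tlb_invalidate_range();
                        while (tlb->nr)
                                free_page_now(tlb->pages[--tlb->nr]);
                }
        }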
    • ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier() · 509eb76e
      Committed by Will Deacon
      __my_cpu_offset is non-volatile, since we want its value to be cached
      when we access several per-cpu variables in a row with preemption
      disabled. This means that we rely on preempt_{en,dis}able to hazard
      with the operation via the barrier() macro, so that we can't end up
      migrating CPUs without reloading the per-cpu offset.
      
      Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile
      asm block as a side-effect, and will happily re-order it before other
      memory clobbers (including those in preempt_disable()) and cache the
      value. This has been observed to break the cmpxchg logic in the slub
      allocator, leading to livelock in kmem_cache_alloc in mainline kernels.
      
      This patch adds a dummy memory input operand to __my_cpu_offset,
      forcing it to be ordered with respect to the barrier() macro.
      
      Cc: <stable@vger.kernel.org>
      Cc: Rob Herring <rob.herring@calxeda.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      509eb76e
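      The fix itself is small: the per-CPU offset read gains a dummy input
      operand tied to the stack pointer, so GCC cannot hoist it across a
      barrier(). A sketch along the lines of the patch (ARM inline asm,
      simplified; treat it as an approximation rather than the exact hunk):

        static inline unsigned long __my_cpu_offset(void)
        {
                unsigned long off;
                /*
                 * Using sp as a dummy input hazards this asm against barrier()
                 * (which clobbers memory) without making the asm volatile, so
                 * the value can still be cached between barriers.
                 */
                register unsigned long *sp asm ("sp");

                asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
                return off;
        }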
  4. 23 May 2013: 1 commit
  5. 16 May 2013: 1 commit
    • ARM: 7705/1: use optimized do_div only for EABI · 049f3e84
      Committed by Arnd Bergmann
      In OABI configurations, some uses of the do_div function
      cause gcc to run out of registers. To work around that,
      we can force the use of the out-of-line version for
      configurations that build an OABI kernel.
      
      Without this patch, building netx_defconfig results in:
      
      net/core/pktgen.c: In function 'pktgen_if_show':
      net/core/pktgen.c:682:2775: error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
      net/core/pktgen.c:682:3153: error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
      net/core/pktgen.c:682:2775: error: 'asm' operand has impossible constraints
      net/core/pktgen.c:682:3153: error: 'asm' operand has impossible constraints
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      049f3e84
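      The effect is roughly to keep the register-hungry inline-asm do_div()
      only for EABI builds and fall back to the out-of-line library helper
      otherwise. A simplified sketch of the kind of guard involved; the
      __do_div_* macro names are placeholders, not the real div64.h symbols:

        #if defined(CONFIG_AEABI)
        /* EABI: the optimized inline implementation has enough registers. */
        #define do_div(n, base)   __do_div_inline(n, base)
        #else
        /* OABI: always call the out-of-line helper to avoid running out
         * of general-purpose registers. */
        #define do_div(n, base)   __do_div_outofline(n, base)
        #endif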
  6. 14 May 2013: 2 commits
  7. 30 Apr 2013: 1 commit
    • arm: set the page table freeing ceiling to TASK_SIZE · 104ad3b3
      Committed by Catalin Marinas
      ARM processors with LPAE enabled use 3 levels of page tables, with an
      entry in the top level (pgd) covering 1GB of virtual space.  Because of
      the branch relocation limitations on ARM, the loadable modules are
      mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
      between kernel modules and user space.
      
      If free_pgtables() is called with the default ceiling 0,
      free_pgd_range() (and subsequently called functions) also frees the page
      table shared between user space and kernel modules (which is normally
      handled by the ARM-specific pgd_free() function).  This patch defines
      the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
      is enabled.
      
      Note that the pgd_free() function already checks the presence of the
      shared pmd page allocated by pgd_alloc() and frees it, though with
      ceiling 0 this wasn't necessary.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org> [3.3+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      104ad3b3
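      The change itself boils down to a conditional define in the ARM pgtable
      header; roughly (a sketch, not the verbatim hunk):

        /* With LPAE, stop free_pgtables() below the pgd entry that is
         * shared between user space and kernel modules. */
        #ifdef CONFIG_ARM_LPAE
        #define USER_PGTABLES_CEILING   TASK_SIZE
        #endif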
  8. 29 Apr 2013: 7 commits
    • ARM: KVM: promote vfp_host pointer to generic host cpu context · 3de50da6
      Committed by Marc Zyngier
      We use the vfp_host pointer to store the host VFP context, should
      the guest start using VFP itself.
      
      Actually, we can use this pointer in a more generic way to store
      CPU-specific data, and arm64 is using it to dump the whole host
      state before switching to the guest.
      
      Simply rename the vfp_host field to host_cpu_context, and the
      corresponding type to kvm_cpu_context_t. No change in functionality.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      3de50da6
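      A minimal sketch of the resulting shape, with the surrounding structures
      simplified (only the field and typedef names come from the commit
      message):

        struct kvm_cpu_context {
                /* host register/VFP state is dumped here before entering
                 * the guest; placeholder member for this sketch */
                unsigned long host_regs_sketch[32];
        };

        typedef struct kvm_cpu_context kvm_cpu_context_t;

        struct kvm_vcpu_arch_sketch {
                /* formerly: struct vfp_hard_struct *vfp_host; */
                kvm_cpu_context_t *host_cpu_context;
        };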
    • ARM: KVM: add architecture specific hook for capabilities · 17b1e31f
      Committed by Marc Zyngier
      Most of the capabilities are common to both arm and arm64, but
      we still need to handle the exceptions.
      
      Introduce kvm_arch_dev_ioctl_check_extension, which both architectures
      implement (in the 32bit case, it just returns 0).
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      17b1e31f
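      On the 32-bit side the hook is a stub; the common capability check then
      defers to it for anything arch-specific. A condensed sketch (function
      names follow the commit message, the call site is simplified):

        /* arm (32-bit): no capabilities beyond the common set. */
        static int kvm_arch_dev_ioctl_check_extension_sketch(long ext)
        {
                return 0;
        }

        /* Common KVM_CHECK_EXTENSION path, heavily simplified. */
        static int kvm_dev_ioctl_check_extension_sketch(long ext)
        {
                switch (ext) {
                /* ... capabilities shared by arm and arm64 ... */
                default:
                        return kvm_arch_dev_ioctl_check_extension_sketch(ext);
                }
        }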
    • ARM: KVM: perform HYP initialization for hotplugged CPUs · d157f4a5
      Committed by Marc Zyngier
      Now that we have the necessary infrastructure to boot a hotplugged CPU
      at any point in time, wire a CPU notifier that will perform the HYP
      init for the incoming CPU.
      
      Note that this depends on the platform code and/or firmware to boot the
      incoming CPU with HYP mode enabled and return to the kernel by following
      the normal boot path (HYP stub installed).
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      d157f4a5
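      With the 3.9-era CPU notifier API, the wiring looks roughly like the
      sketch below; cpu_init_hyp_mode_sketch() stands in for the real per-CPU
      HYP init routine:

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        static void cpu_init_hyp_mode_sketch(void)
        {
                /* install HYP page tables, stack and vectors on this CPU */
        }

        static int hyp_init_cpu_notify_sketch(struct notifier_block *self,
                                              unsigned long action, void *cpu)
        {
                if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
                        cpu_init_hyp_mode_sketch();
                return NOTIFY_OK;
        }

        static struct notifier_block hyp_init_cpu_nb_sketch = {
                .notifier_call = hyp_init_cpu_notify_sketch,
        };

        /* at arch init time: register_cpu_notifier(&hyp_init_cpu_nb_sketch); */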
    • ARM: KVM: switch to a dual-step HYP init code · 5a677ce0
      Committed by Marc Zyngier
      Our HYP init code suffers from two major design issues:
      - it cannot support CPU hotplug, as we tear down the idmap very early
      - it cannot perform a TLB invalidation when switching from init to
        runtime mappings, as pages are manipulated from PL1 exclusively
      
      The hotplug problem mandates that we keep two sets of page tables
      (boot and runtime). The TLB problem mandates that we're able to
      transition from one PGD to another while in HYP, invalidating the TLBs
      in the process.
      
      To be able to do this, we need to share a page between the two page
      tables: a page that will have the same VA in both configurations. All we
      need is a VA that has the following properties:
      - This VA can't be used to represent a kernel mapping.
      - This VA will not conflict with the physical address of the kernel text
      
      The vectors page seems to satisfy this requirement:
      - The kernel never maps anything else there
      - Since the kernel text is copied to the beginning of physical memory,
        it is unlikely to use the last 64kB (I doubt we'll ever support KVM
        on a system with something like 4MB of RAM, but patches are very
        welcome).
      
      Let's call this VA the trampoline VA.
      
      Now, we map our init page at 3 locations:
      - idmap in the boot pgd
      - trampoline VA in the boot pgd
      - trampoline VA in the runtime pgd
      
      The init scenario is now the following:
      - We jump in HYP with four parameters: boot HYP pgd, runtime HYP pgd,
        runtime stack, runtime vectors
      - Enable the MMU with the boot pgd
      - Jump to a target address inside the trampoline page (remember, this is
        the same physical page!)
      - Now switch to the runtime pgd (same VA, and still the same physical
        page!)
      - Invalidate TLBs
      - Set stack and vectors
      - Profit! (or eret, if you only care about the code).
      
      Note that we keep the boot mapping permanently (it is not strictly an
      idmap anymore) to allow for CPU hotplug in later patches.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      5a677ce0
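      A rough sketch of the mapping setup implied by the scenario above; the
      create_*_sketch() helpers, the pgd variables and the TRAMPOLINE_VA value
      are placeholders, not the real KVM/ARM symbols:

        extern void *boot_hyp_pgd_sketch, *hyp_pgd_sketch;
        extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
        #define TRAMPOLINE_VA_SKETCH  0xffff0000UL  /* assumed vectors-page VA */

        int create_hyp_idmap_sketch(void *pgd, void *start, void *end);
        int create_hyp_mapping_sketch(void *pgd, unsigned long va, void *page);

        static void map_hyp_init_pages_sketch(void)
        {
                /* 1:1 map of the init code in the boot pgd (MMU off -> on) */
                create_hyp_idmap_sketch(boot_hyp_pgd_sketch,
                                        __hyp_idmap_text_start,
                                        __hyp_idmap_text_end);
                /* the same physical page at TRAMPOLINE_VA in *both* pgds */
                create_hyp_mapping_sketch(boot_hyp_pgd_sketch,
                                          TRAMPOLINE_VA_SKETCH,
                                          __hyp_idmap_text_start);
                create_hyp_mapping_sketch(hyp_pgd_sketch,
                                          TRAMPOLINE_VA_SKETCH,
                                          __hyp_idmap_text_start);
        }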
    • ARM: KVM: rework HYP page table freeing · 4f728276
      Committed by Marc Zyngier
      There is no point in freeing HYP page tables differently from Stage-2.
      They now have the same requirements, and should be dealt with in the same way.
      
      Promote unmap_stage2_range to be The One True Way, and get rid of a number
      of nasty bugs in the process (good thing we never actually called free_hyp_pmds
      before...).
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      4f728276
    • ARM: KVM: move to a KVM provided HYP idmap · 2fb41059
      Committed by Marc Zyngier
      After the HYP page table rework, it is pretty easy to let the KVM
      code provide its own idmap, rather than expecting the kernel to
      provide it. It actually takes less code to do so.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      2fb41059
    • ARM: KVM: add support for minimal host vs guest profiling · 210552c1
      Committed by Marc Zyngier
      In order to be able to correctly profile what is happening on the
      host, we need to be able to identify when we're running on the guest,
      and log these events differently.
      
      Perf offers a simple way to register callbacks into KVM. Mimic what
      x86 does and enjoy being able to profile your KVM host.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      210552c1
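      Mirroring the x86 code means registering a struct
      perf_guest_info_callbacks with perf; a sketch with placeholder predicate
      bodies (the real ones inspect the vcpu state of the current CPU):

        #include <linux/perf_event.h>

        static int kvm_is_in_guest_sketch(void)            { return 0; }
        static int kvm_is_user_mode_sketch(void)           { return 0; }
        static unsigned long kvm_get_guest_ip_sketch(void) { return 0; }

        static struct perf_guest_info_callbacks kvm_guest_cbs_sketch = {
                .is_in_guest  = kvm_is_in_guest_sketch,
                .is_user_mode = kvm_is_user_mode_sketch,
                .get_guest_ip = kvm_get_guest_ip_sketch,
        };

        int kvm_perf_init_sketch(void)
        {
                return perf_register_guest_info_callbacks(&kvm_guest_cbs_sketch);
        }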
  9. 26 Apr 2013: 1 commit
  10. 25 Apr 2013: 1 commit
    • ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE · 6aaa189f
      Committed by Catalin Marinas
      ARM processors with LPAE enabled use 3 levels of page tables, with an
      entry in the top level (pgd) covering 1GB of virtual space. Because of
      the branch relocation limitations on ARM, the loadable modules are
      mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
      between kernel modules and user space.
      
      If free_pgtables() is called with the default ceiling 0,
      free_pgd_range() (and subsequently called functions) also frees the page
      table shared between user space and kernel modules (which is normally
      handled by the ARM-specific pgd_free() function). This patch defines
      the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
      is enabled.
      
      Note that the pgd_free() function already checks the presence of the
      shared pmd page allocated by pgd_alloc() and frees it, though with
      ceiling 0 this wasn't necessary.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <stable@vger.kernel.org> # 3.3+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6aaa189f
  11. 24 Apr 2013: 5 commits
    • ARM: mcpm: provide an interface to set the SMP ops at run time · a7eb7c6f
      Committed by Nicolas Pitre
      This is cleaner than exporting the mcpm_smp_ops structure.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Jon Medhurst <tixy@linaro.org>
      a7eb7c6f
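      The interface amounts to a thin wrapper around smp_set_ops(); roughly
      (a sketch, assuming mcpm_smp_ops stays static in the MCPM code):

        /* Let platform code install the MCPM SMP ops without having to
         * export the mcpm_smp_ops structure itself. */
        void __init mcpm_smp_set_ops(void)
        {
                smp_set_ops(&mcpm_smp_ops);
        }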
    • ARM: mcpm: introduce helpers for platform coherency exit/setup · 7fe31d28
      Committed by Dave Martin
      This provides helper methods to coordinate between CPUs coming down
      and CPUs going up, as well as documentation on the algorithms used,
      so that cluster teardown and setup operations are not performed on a
      cluster simultaneously.
      
      For use in the power_down() implementation:
        * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
        * __mcpm_outbound_enter_critical(unsigned int cluster)
        * __mcpm_outbound_leave_critical(unsigned int cluster)
        * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)
      
      The power_up_setup() helper should do platform-specific setup in
      preparation for turning the CPU on, such as invalidating local caches
      or entering coherency.  It must be assembler for now, since it must
      run before the MMU can be switched on.  It is passed the affinity level
      for which initialization should be performed.
      
      Because the mcpm_sync_struct content is looked-up and modified
      with the cache enabled or disabled depending on the code path, it is
      crucial to always ensure proper cache maintenance to update main memory
      right away.  The sync_cache_*() helpers are used to that end.
      
      Also, in order to prevent a cached writer from interfering with an
      adjacent non-cached writer, we ensure each state variable is located to
      a separate cache line.
      
      Thanks to Nicolas Pitre and Achin Gupta for the help with this
      patch.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      7fe31d28
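      A condensed sketch of how a platform power_down() path might use the
      four helpers listed above. The prototypes follow the commit message
      (return types assumed void here), and the pm_*_sketch() hooks are
      hypothetical placeholders:

        void __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu);
        void __mcpm_outbound_enter_critical(unsigned int cluster);
        void __mcpm_outbound_leave_critical(unsigned int cluster);
        void __mcpm_cpu_down(unsigned int cluster, unsigned int cpu);

        int  pm_last_cpu_in_cluster_sketch(unsigned int cpu, unsigned int cluster);
        void pm_cluster_teardown_sketch(unsigned int cluster);
        void pm_cpu_teardown_sketch(unsigned int cpu, unsigned int cluster);

        static void platform_power_down_sketch(unsigned int cpu,
                                               unsigned int cluster)
        {
                __mcpm_cpu_going_down(cluster, cpu);

                if (pm_last_cpu_in_cluster_sketch(cpu, cluster)) {
                        __mcpm_outbound_enter_critical(cluster);
                        pm_cluster_teardown_sketch(cluster); /* L2, coherency */
                        __mcpm_outbound_leave_critical(cluster);
                }

                pm_cpu_teardown_sketch(cpu, cluster); /* clean caches, exit SMP */
                __mcpm_cpu_down(cluster, cpu);
                /* only now is it safe to ask the PMU/firmware to cut power */
        }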
    • ARM: mcpm: introduce the CPU/cluster power API · 7c2b8605
      Committed by Nicolas Pitre
      This is the basic API used to handle the powering up/down of individual
      CPUs in a (multi-)cluster system.  The platform specific backend
      implementation has the responsibility to also handle the cluster level
      power as well when the first/last CPU in a cluster is brought up/down.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      7c2b8605
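      In outline, the API pairs a couple of calls used by generic code with a
      platform backend registration. A sketch of the shape; the exact
      prototypes and ops layout are assumptions here, not quoted from the
      patch:

        /* Calls made by generic SMP boot/hotplug code. */
        int  mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster);
        void mcpm_cpu_power_down(void);  /* runs on the dying CPU itself */

        /* Backend provided by the platform; it must also handle cluster-level
         * power when the first/last CPU of a cluster comes up/goes down. */
        struct mcpm_platform_ops_sketch {
                int  (*power_up)(unsigned int cpu, unsigned int cluster);
                void (*power_down)(void);
        };

        int mcpm_platform_register_sketch(const struct mcpm_platform_ops_sketch *ops);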
    • ARM: multi-cluster PM: secondary kernel entry code · e8db288e
      Committed by Nicolas Pitre
      CPUs in cluster based systems, such as big.LITTLE, have special needs
      when entering the kernel due to a hotplug event, or when resuming from
      a deep sleep mode.
      
      This is vectorized so multiple CPUs can enter the kernel in parallel
      without serialization.
      
      The mcpm prefix stands for "multi cluster power management", however
      this is usable on single cluster systems as well.  Only the basic
      structure is introduced here.  This will be extended with later patches.
      
      In order not to complicate things more than necessary, the planned
      work to make runtime-adjusted MPIDR-based indexing and dynamic memory
      allocation for cluster states is postponed to a later cycle. The
      MAX_NR_CLUSTERS and MAX_CPUS_PER_CLUSTER static definitions
      should be sufficient for those systems expected to be available in the
      near future.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      e8db288e
    • ARM: cacheflush: add synchronization helpers for mixed cache state accesses · 0c91e7e0
      Committed by Nicolas Pitre
      Algorithms used by the MCPM layer rely on state variables which are
      accessed while the cache is either active or inactive, depending
      on the code path and the active state.
      
      This patch introduces generic cache maintenance helpers to provide the
      necessary cache synchronization for such state variables to always hit
      main memory in an ordered way.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Dave Martin <dave.martin@linaro.org>
      0c91e7e0
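      The helpers boil down to "clean to memory before a non-cached reader
      looks" and "invalidate before a cached reader looks". A simplified
      sketch with placeholder low-level primitives (the real helpers are
      macros over the ARM cpu/outer cache maintenance calls):

        #include <stddef.h>

        void dcache_clean_area_sketch(void *p, size_t size);
        void dcache_inv_area_sketch(void *p, size_t size);

        /* Writer side: push the state variable out to main memory so a CPU
         * running with its cache off sees the update. */
        #define sync_cache_w_sketch(ptr) \
                dcache_clean_area_sketch((ptr), sizeof(*(ptr)))

        /* Reader side: discard any stale cached copy before reading a
         * variable that may have been written with caches off. */
        #define sync_cache_r_sketch(ptr) \
                dcache_inv_area_sketch((ptr), sizeof(*(ptr)))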
  12. 19 Apr 2013: 1 commit
    • ARM: exynos: move debug-macro.S to include/debug/ · a2e40710
      Committed by Arnd Bergmann
      The move is necessary to support early debug output on exynos
      with multiplatform configurations. This also implies moving the
      plat/debug-macro.S file, but we are leaving the remaining users of that
      file in place, to avoid adding large numbers of extra configuration
      options to Kconfig.debug.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      a2e40710
  13. 17 Apr 2013: 2 commits
    • ARM: 7692/1: iop3xx: move IOP3XX_PERIPHERAL_VIRT_BASE · f5d6a144
      Committed by Aaro Koskinen
      Currently IOP3XX_PERIPHERAL_VIRT_BASE conflicts with PCI_IO_VIRT_BASE:
      
      					address         size
      	PCI_IO_VIRT_BASE                0xfee00000      0x200000
      	IOP3XX_PERIPHERAL_VIRT_BASE     0xfeffe000      0x2000
      
      Fix by moving IOP3XX_PERIPHERAL_VIRT_BASE below PCI_IO_VIRT_BASE.
      
      The patch fixes the following kernel panic with 3.9-rc1 on iop3xx boards:
      
      [    0.000000] Booting Linux on physical CPU 0x0
      [    0.000000] Initializing cgroup subsys cpu
      [    0.000000] Linux version 3.9.0-rc1-iop32x (aaro@blackmetal) (gcc version 4.7.2 (GCC) ) #20 PREEMPT Tue Mar 5 16:44:36 EET 2013
      [    0.000000] bootconsole [earlycon0] enabled
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] kernel BUG at mm/vmalloc.c:1145!
      [    0.000000] Internal error: Oops - BUG: 0 [#1] PREEMPT ARM
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0    Not tainted  (3.9.0-rc1-iop32x #20)
      [    0.000000] PC is at vm_area_add_early+0x4c/0x88
      [    0.000000] LR is at add_static_vm_early+0x14/0x68
      [    0.000000] pc : [<c03e74a8>]    lr : [<c03e1c40>]    psr: 800000d3
      [    0.000000] sp : c03ffee4  ip : dfffdf88  fp : c03ffef4
      [    0.000000] r10: 00000002  r9 : 000000cf  r8 : 00000653
      [    0.000000] r7 : c040eca8  r6 : c03e2408  r5 : dfffdf60  r4 : 00200000
      [    0.000000] r3 : dfffdfd8  r2 : feffe000  r1 : ff000000  r0 : dfffdf60
      [    0.000000] Flags: Nzcv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment kernel
      [    0.000000] Control: 0000397f  Table: a0004000  DAC: 00000017
      [    0.000000] Process swapper (pid: 0, stack limit = 0xc03fe1b8)
      [    0.000000] Stack: (0xc03ffee4 to 0xc0400000)
      [    0.000000] fee0:          00200000 c03fff0c c03ffef8 c03e1c40 c03e7468 00200000 fee00000
      [    0.000000] ff00: c03fff2c c03fff10 c03e23e4 c03e1c38 feffe000 c0408ee4 ff000000 c0408f04
      [    0.000000] ff20: c03fff3c c03fff30 c03e2434 c03e23b4 c03fff84 c03fff40 c03e2c94 c03e2414
      [    0.000000] ff40: c03f8878 c03f6410 ffff0000 000bffff 00001000 00000008 c03fff84 c03f6410
      [    0.000000] ff60: c04227e8 c03fffd4 a0008000 c03f8878 69052e30 c02f96eb c03fffbc c03fff88
      [    0.000000] ff80: c03e044c c03e268c 00000000 0000397f c0385130 00000001 ffffffff c03f8874
      [    0.000000] ffa0: dfffffff a0004000 69052e30 a03f61a0 c03ffff4 c03fffc0 c03dd5cc c03e0184
      [    0.000000] ffc0: 00000000 00000000 00000000 00000000 00000000 c03f8878 0000397d c040601c
      [    0.000000] ffe0: c03f8874 c0408674 00000000 c03ffff8 a0008040 c03dd558 00000000 00000000
      [    0.000000] Backtrace:
      [    0.000000] [<c03e745c>] (vm_area_add_early+0x0/0x88) from [<c03e1c40>] (add_static_vm_early+0x14/0x68)
      Tested-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      f5d6a144
    • ARM: KVM: fix L_PTE_S2_RDWR to actually be Read/Write · 865499ea
      Committed by Marc Zyngier
      Looks like our L_PTE_S2_RDWR definition is slightly wrong,
      and is actually write only (see ARM ARM Table B3-9, Stage 2 control
      of access permissions). Didn't make a difference for normal pages,
      as we OR the flags together, but I'm still wondering how it worked
      for Stage-2 mapped devices, such as the GIC.
      
      Brown paper bag time, again.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      865499ea
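      The fix is a one-bit change in the stage-2 access-permission field:
      HAP[2:1] must be 0b11 for read/write rather than 0b10 (write-only). A
      sketch of the before/after defines, reconstructed from the description
      (treat the exact values as an assumption):

        /* Stage-2 AP bits live at PTE bits [7:6] (HAP[2:1]). */
        #define L_PTE_S2_RDONLY  (_AT(pteval_t, 1) << 6)  /* read-only  */
        #define L_PTE_S2_RDWR    (_AT(pteval_t, 3) << 6)  /* read/write */
        /* before the fix: (_AT(pteval_t, 2) << 6), i.e. write-only     */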
  14. 15 Apr 2013: 1 commit
  15. 12 Apr 2013: 2 commits
    • ARM: timer-sp: convert to use CLKSRC_OF init · 7a0eca71
      Committed by Rob Herring
      This adds CLKSRC_OF-based init for the sp804 timer. The clock
      initialization is refactored to support retrieving the clock(s) from the DT.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      7a0eca71
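      CLKSRC_OF-based init means the driver declares its DT compatible string
      and an init callback via CLOCKSOURCE_OF_DECLARE, so the core can probe
      it from the device tree. A sketch; the callback name and body are
      placeholders ("arm,sp804" is the usual binding string):

        #include <linux/clocksource.h>
        #include <linux/of.h>

        static void __init sp804_of_init_sketch(struct device_node *np)
        {
                /* map registers, get the clock(s) from the DT, then register
                 * the clocksource/clockevent for this sp804 instance */
        }

        /* Matched against the timer node's compatible string at boot. */
        CLOCKSOURCE_OF_DECLARE(sp804_sketch, "arm,sp804", sp804_of_init_sketch);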
    • ARM: convert arm/arm64 arch timer to use CLKSRC_OF init · 0583fe47
      Committed by Rob Herring
      This converts arm and arm64 to use CLKSRC_OF DT-based initialization for
      the arch timer. A new function, arch_timer_arch_init, is added to allow
      for arch-specific setup.
      
      This has a side effect of enabling sched_clock on omap5 and exynos5. There
      should not be any reason not to use the arch timers for sched_clock.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Simon Horman <horms@verge.net.au>
      Cc: Magnus Damm <magnus.damm@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-samsung-soc@vger.kernel.org
      Cc: linux-omap@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      0583fe47
  16. 11 Apr 2013: 2 commits
  17. 09 Apr 2013: 1 commit
    • ARM: Add interface for registering and calling firmware-specific operations · 7366b92a
      Committed by Tomasz Figa
      Some boards are running with secure firmware running in TrustZone secure
      world, which changes the way some things have to be initialized.
      
      This patch adds an interface for platforms to specify available firmware
      operations and call them.
      
      A wrapper macro, call_firmware_op(), checks if the operation is provided
      and calls it if so, otherwise returns -ENOSYS to allow fallback to
      the legacy operation.
      
      By default no operations are provided.
      
      Example of use:
      
      In code using firmware ops:
      
        __raw_writel(virt_to_phys(exynos4_secondary_startup),
                     CPU1_BOOT_REG);
      
        /* Call Exynos specific smc call */
        if (call_firmware_op(cpu_boot, cpu) == -ENOSYS)
                cpu_boot_legacy(...); /* Try legacy way */
      
        gic_raise_softirq(cpumask_of(cpu), 1);
      
      In board-/platform-specific code:
      
        static int platformX_do_idle(void)
        {
                /* tell platformX firmware to enter idle */
                return 0;
        }
      
        static int platformX_cpu_boot(int i)
        {
                /* tell platformX firmware to boot CPU i */
                return 0;
        }
      
        static const struct firmware_ops platformX_firmware_ops = {
                .do_idle      = platformX_do_idle,
                .cpu_boot     = platformX_cpu_boot,
                /* other operations not available on platformX */
        };
      
        static void __init board_init_early(void)
        {
                register_firmware_ops(&platformX_firmware_ops);
        }
      Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
      Signed-off-by: Tomasz Figa <t.figa@samsung.com>
      Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
      7366b92a
  18. 08 Apr 2013: 3 commits