// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Implementation of various system calls for Linux/PowerPC
 *
 * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *
 * Derived from "arch/i386/kernel/sys_i386.c"
 * Adapted from the i386 version by Gary Thomas
 * Modified by Cort Dougan (cort@cs.nmt.edu)
 * and Paul Mackerras (paulus@cs.anu.edu.au).
 *
 * This file contains various random system calls that
 * have a non-standard calling sequence on the Linux/PPC
 * platform.
 */
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/syscalls.h>
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/smp.h>
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>
#include <linux/stat.h>
#include <linux/mman.h>
#include <linux/sys.h>
#include <linux/ipc.h>
#include <linux/utsname.h>
#include <linux/file.h>
#include <linux/personality.h>
#include <linux/uaccess.h>

#include <asm/syscalls.h>
#include <asm/time.h>
#include <asm/unistd.h>

static long do_mmap2(unsigned long addr, size_t len,
		     unsigned long prot, unsigned long flags,
		     unsigned long fd, unsigned long off, int shift)
{
	if (!arch_validate_prot(prot, addr))
		return -EINVAL;

	if (!IS_ALIGNED(off, 1 << shift))
		return -EINVAL;

	return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> shift);
}

/*
 * powerpc/tracing: Allow tracing of mmap syscalls
 *
 * Previously sys_mmap() and sys_mmap2() (32-bit only) were not visible to
 * the syscall tracing machinery, so users were not able to see the
 * execution of mmap() syscalls using the syscall tracer.
 *
 * Fix that by using SYSCALL_DEFINE6 for sys_mmap() and sys_mmap2() so that
 * the meta-data associated with these syscalls is visible to the syscall
 * tracer.
 *
 * A side-effect of this change is that the return type has changed from
 * unsigned long to long. However this should have no effect: the only code
 * in the kernel which uses the result of these syscalls is in the syscall
 * return path, which is written in asm and treats the result as unsigned
 * regardless.
 *
 * Example output:
 *  cat-3399 [001] .... 196.542410: sys_mmap(addr: 7fff922a0000, len: 20000, prot: 3, flags: 812, fd: 3, offset: 1b0000)
 *  cat-3399 [001] .... 196.542443: sys_mmap -> 0x7fff922a0000
 *  cat-3399 [001] .... 196.542668: sys_munmap(addr: 7fff922c0000, len: 6d2c)
 *  cat-3399 [001] .... 196.542677: sys_munmap -> 0x0
 *
 * Signed-off-by: Balbir Singh <bsingharora@gmail.com>
 * [mpe: Massage change log, add detail on return type change]
 * Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
 */
SYSCALL_DEFINE6(mmap2, unsigned long, addr, size_t, len,
		unsigned long, prot, unsigned long, flags,
		unsigned long, fd, unsigned long, pgoff)
{
	return do_mmap2(addr, len, prot, flags, fd, pgoff, PAGE_SHIFT - 12);
}

#ifdef CONFIG_COMPAT
COMPAT_SYSCALL_DEFINE6(mmap2,
		       unsigned long, addr, size_t, len,
		       unsigned long, prot, unsigned long, flags,
		       unsigned long, fd, unsigned long, off_4k)
{
	return do_mmap2(addr, len, prot, flags, fd, off_4k, PAGE_SHIFT - 12);
}
#endif
SYSCALL_DEFINE6(mmap, unsigned long, addr, size_t, len,
		unsigned long, prot, unsigned long, flags,
		unsigned long, fd, off_t, offset)
{
	return do_mmap2(addr, len, prot, flags, fd, offset, PAGE_SHIFT);
}

#ifdef CONFIG_PPC64
static long do_ppc64_personality(unsigned long personality)
{
	long ret;

	if (personality(current->personality) == PER_LINUX32
	    && personality(personality) == PER_LINUX)
		personality = (personality & ~PER_MASK) | PER_LINUX32;
	ret = ksys_personality(personality);
	if (personality(ret) == PER_LINUX32)
		ret = (ret & ~PER_MASK) | PER_LINUX;
	return ret;
}

SYSCALL_DEFINE1(ppc64_personality, unsigned long, personality)
{
	return do_ppc64_personality(personality);
}

#ifdef CONFIG_COMPAT
COMPAT_SYSCALL_DEFINE1(ppc64_personality, unsigned long, personality)
{
	return do_ppc64_personality(personality);
}
#endif /* CONFIG_COMPAT */
#endif /* CONFIG_PPC64 */

SYSCALL_DEFINE6(ppc_fadvise64_64,
		int, fd, int, advice, u32, offset_high, u32, offset_low,
		u32, len_high, u32, len_low)
{
	return ksys_fadvise64_64(fd, merge_64(offset_high, offset_low),
				 merge_64(len_high, len_low), advice);
}

/*
 * powerpc: Add a proper syscall for switching endianness
 *
 * We previously had a "special" syscall for switching endianness: syscall
 * number 0x1ebe, which was handled explicitly in the 64-bit syscall
 * exception entry.
 *
 * That has a few problems. Firstly, the syscall number is outside of the
 * usual range, which confuses various tools; for example, strace doesn't
 * recognise the syscall at all. Secondly, it's handled explicitly as a
 * special case in the syscall exception entry, which is complicated enough
 * without it.
 *
 * As a first step toward removing the special syscall, we need a regular
 * syscall that implements the same functionality.
 *
 * The logic is simple: it toggles the MSR_LE bit in the userspace MSR.
 * This is the same as the special syscall, with the caveat that the
 * special syscall clobbers fewer registers. This version clobbers r9-r12,
 * XER, CTR, and CR0-1,5-7.
 *
 * Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
 */
SYSCALL_DEFINE0(switch_endian)
{
	struct thread_info *ti;

	regs_set_return_msr(current->thread.regs,
			    current->thread.regs->msr ^ MSR_LE);
	/*
	 * Set TIF_RESTOREALL so that r3 isn't clobbered on return to
	 * userspace. That also has the effect of restoring the non-volatile
	 * GPRs, so we saved them on the way in here.
	 */
	ti = current_thread_info();
	ti->flags |= _TIF_RESTOREALL;

	return 0;
}