Commit e24dd9ee authored by Linus Torvalds

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security layer updates from James Morris:

 - a major update for AppArmor. From JJ:

     * several bug fixes and cleanups

     * the patch to add symlink support to securityfs that was floated
       on the list earlier and the apparmorfs changes that make use of
       securityfs symlinks

     * it introduces the domain labeling base code that Ubuntu has been
       carrying for several years, with several cleanups applied. And it
       converts the current mediation over to using the domain labeling
       base, which brings domain stacking support with it. This finally
       will bring the base upstream code in line with Ubuntu and provide
       a base to upstream the new feature work that Ubuntu carries.

     * This does _not_ contain any of the newer apparmor mediation
       features/controls (mount, signals, network, keys, ...) that
       Ubuntu is currently carrying, all of which will be RFC'd on top
       of this.

 - Also notable is the InfiniBand work in SELinux, and the new file:map
   permission. From Paul:

      "While we're down to 21 patches for v4.13 (it was 31 for v4.12),
       the diffstat jumps up tremendously with over 2k of line changes.

       Almost all of these changes are the SELinux/IB work done by
       Daniel Jurgens; some other noteworthy changes include a NFS v4.2
       labeling fix, a new file:map permission, and reporting of policy
       capabilities on policy load"

   There's also now genfscon labeling support for tracefs, which was
   lost in v4.1 with the separation from debugfs.

 - Smack incorporates a safer socket check in file_receive, and adds a
    cap_capable call in its privilege check.

 - TPM as usual has a bunch of fixes and enhancements.

 - Multiple calls to security_add_hooks() can now be made for the same
   LSM, to allow LSMs to have hook declarations across multiple files.

 - IMA now supports different "ima_appraise=" modes (e.g. log, fix) from
   the boot command line.
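As a sketch (the full list of accepted values should be checked against the IMA documentation for this release), one of these modes would be selected by appending the option to the kernel command line:

```
ima_appraise=log
```

In "log" mode appraisal failures are expected to be recorded rather than enforced; "fix" is understood to update the stored reference values in place instead of denying access.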

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (126 commits)
  apparmor: put back designators in struct initialisers
  seccomp: Switch from atomic_t to refcount_t
  seccomp: Adjust selftests to avoid double-join
  seccomp: Clean up core dump logic
  IMA: update IMA policy documentation to include pcr= option
  ima: Log the same audit cause whenever a file has no signature
  ima: Simplify policy_func_show.
  integrity: Small code improvements
  ima: fix get_binary_runtime_size()
  ima: use ima_parse_buf() to parse template data
  ima: use ima_parse_buf() to parse measurements headers
  ima: introduce ima_parse_buf()
  ima: Add cgroups2 to the defaults list
  ima: use memdup_user_nul
  ima: fix up #endif comments
  IMA: Correct Kconfig dependencies for hash selection
  ima: define is_ima_appraise_enabled()
  ima: define Kconfig IMA_APPRAISE_BOOTPARAM option
  ima: define a set of appraisal rules requiring file signatures
  ima: extend the "ima_policy" boot command line to support multiple policies
  ...
......@@ -34,9 +34,10 @@ Description:
fsuuid:= file system UUID (e.g 8bcbe394-4f13-4144-be8e-5aa9ea2ce2f6)
uid:= decimal value
euid:= decimal value
fowner:=decimal value
fowner:= decimal value
lsm: are LSM specific
option: appraise_type:= [imasig]
pcr:= decimal value
default policy:
# PROC_SUPER_MAGIC
......@@ -96,3 +97,8 @@ Description:
Smack:
measure subj_user=_ func=FILE_CHECK mask=MAY_READ
Example of measure rules using alternate PCRs:
measure func=KEXEC_KERNEL_CHECK pcr=4
measure func=KEXEC_INITRAMFS_CHECK pcr=5
......@@ -1501,12 +1501,21 @@
in crypto/hash_info.h.
ima_policy= [IMA]
The builtin measurement policy to load during IMA
setup. Specifying "tcb" as the value, measures all
programs exec'd, files mmap'd for exec, and all files
opened with the read mode bit set by either the
effective uid (euid=0) or uid=0.
Format: "tcb"
The builtin policies to load during IMA setup.
Format: "tcb | appraise_tcb | secure_boot"
The "tcb" policy measures all programs exec'd, files
mmap'd for exec, and all files opened with the read
mode bit set by either the effective uid (euid=0) or
uid=0.
The "appraise_tcb" policy appraises the integrity of
all files owned by root. (This is the equivalent
of ima_appraise_tcb.)
The "secure_boot" policy appraises the integrity
of files (e.g. kexec kernel image, kernel modules,
firmware, policy, etc) based on file signatures.
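If the built-in policies can be combined, as the shortlog entry about supporting multiple policies suggests (an assumption worth verifying against the final documentation), a boot line might look like:

```
ima_policy=tcb ima_policy=secure_boot
```

loading both the "tcb" measurement policy and the signature-based "secure_boot" appraisal policy.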
ima_tcb [IMA] Deprecated. Use ima_policy= instead.
Load a policy which meets the needs of the Trusted
......
......@@ -127,7 +127,7 @@ static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client)
struct device *dev = &client->dev;
int ret;
ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios);
ret = devm_acpi_dev_add_driver_gpios(dev, acpi_st33zp24_gpios);
if (ret)
return ret;
......@@ -285,7 +285,6 @@ static int st33zp24_i2c_remove(struct i2c_client *client)
if (ret)
return ret;
acpi_dev_remove_driver_gpios(ACPI_COMPANION(&client->dev));
return 0;
}
......
......@@ -246,7 +246,7 @@ static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev)
struct device *dev = &spi_dev->dev;
int ret;
ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios);
ret = devm_acpi_dev_add_driver_gpios(dev, acpi_st33zp24_gpios);
if (ret)
return ret;
......@@ -402,7 +402,6 @@ static int st33zp24_spi_remove(struct spi_device *dev)
if (ret)
return ret;
acpi_dev_remove_driver_gpios(ACPI_COMPANION(&dev->dev));
return 0;
}
......
......@@ -416,7 +416,8 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
/* Store the decision as chip->locality will be changed. */
need_locality = chip->locality == -1;
if (need_locality && chip->ops->request_locality) {
if (!(flags & TPM_TRANSMIT_RAW) &&
need_locality && chip->ops->request_locality) {
rc = chip->ops->request_locality(chip, 0);
if (rc < 0)
goto out_no_locality;
......@@ -429,8 +430,9 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
rc = chip->ops->send(chip, (u8 *) buf, count);
if (rc < 0) {
dev_err(&chip->dev,
"tpm_transmit: tpm_send: error %d\n", rc);
if (rc != -EPIPE)
dev_err(&chip->dev,
"%s: tpm_send: error %d\n", __func__, rc);
goto out;
}
......@@ -536,59 +538,62 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
return 0;
}
EXPORT_SYMBOL_GPL(tpm_transmit_cmd);
#define TPM_DIGEST_SIZE 20
#define TPM_RET_CODE_IDX 6
#define TPM_INTERNAL_RESULT_SIZE 200
#define TPM_ORD_GET_CAP cpu_to_be32(101)
#define TPM_ORD_GET_RANDOM cpu_to_be32(70)
#define TPM_ORD_GET_CAP 101
#define TPM_ORD_GET_RANDOM 70
static const struct tpm_input_header tpm_getcap_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(22),
.ordinal = TPM_ORD_GET_CAP
.ordinal = cpu_to_be32(TPM_ORD_GET_CAP)
};
ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap,
const char *desc, size_t min_cap_length)
{
struct tpm_cmd_t tpm_cmd;
struct tpm_buf buf;
int rc;
tpm_cmd.header.in = tpm_getcap_header;
rc = tpm_buf_init(&buf, TPM_TAG_RQU_COMMAND, TPM_ORD_GET_CAP);
if (rc)
return rc;
if (subcap_id == TPM_CAP_VERSION_1_1 ||
subcap_id == TPM_CAP_VERSION_1_2) {
tpm_cmd.params.getcap_in.cap = cpu_to_be32(subcap_id);
/*subcap field not necessary */
tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(0);
tpm_cmd.header.in.length -= cpu_to_be32(sizeof(__be32));
tpm_buf_append_u32(&buf, subcap_id);
tpm_buf_append_u32(&buf, 0);
} else {
if (subcap_id == TPM_CAP_FLAG_PERM ||
subcap_id == TPM_CAP_FLAG_VOL)
tpm_cmd.params.getcap_in.cap =
cpu_to_be32(TPM_CAP_FLAG);
tpm_buf_append_u32(&buf, TPM_CAP_FLAG);
else
tpm_cmd.params.getcap_in.cap =
cpu_to_be32(TPM_CAP_PROP);
tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
tpm_cmd.params.getcap_in.subcap = cpu_to_be32(subcap_id);
tpm_buf_append_u32(&buf, TPM_CAP_PROP);
tpm_buf_append_u32(&buf, 4);
tpm_buf_append_u32(&buf, subcap_id);
}
rc = tpm_transmit_cmd(chip, NULL, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE,
min_cap_length, 0, desc);
if (!rc)
*cap = tpm_cmd.params.getcap_out.cap;
*cap = *(cap_t *)&buf.data[TPM_HEADER_SIZE + 4];
tpm_buf_destroy(&buf);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_getcap);
#define TPM_ORD_STARTUP cpu_to_be32(153)
#define TPM_ORD_STARTUP 153
#define TPM_ST_CLEAR cpu_to_be16(1)
#define TPM_ST_STATE cpu_to_be16(2)
#define TPM_ST_DEACTIVATED cpu_to_be16(3)
static const struct tpm_input_header tpm_startup_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(12),
.ordinal = TPM_ORD_STARTUP
.ordinal = cpu_to_be32(TPM_ORD_STARTUP)
};
static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
......@@ -737,7 +742,7 @@ EXPORT_SYMBOL_GPL(tpm_get_timeouts);
#define CONTINUE_SELFTEST_RESULT_SIZE 10
static const struct tpm_input_header continue_selftest_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(10),
.ordinal = cpu_to_be32(TPM_ORD_CONTINUE_SELFTEST),
};
......@@ -760,13 +765,13 @@ static int tpm_continue_selftest(struct tpm_chip *chip)
return rc;
}
#define TPM_ORDINAL_PCRREAD cpu_to_be32(21)
#define TPM_ORDINAL_PCRREAD 21
#define READ_PCR_RESULT_SIZE 30
#define READ_PCR_RESULT_BODY_SIZE 20
static const struct tpm_input_header pcrread_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(14),
.ordinal = TPM_ORDINAL_PCRREAD
.ordinal = cpu_to_be32(TPM_ORDINAL_PCRREAD)
};
int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
......@@ -838,15 +843,34 @@ int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf)
}
EXPORT_SYMBOL_GPL(tpm_pcr_read);
#define TPM_ORD_PCR_EXTEND cpu_to_be32(20)
#define TPM_ORD_PCR_EXTEND 20
#define EXTEND_PCR_RESULT_SIZE 34
#define EXTEND_PCR_RESULT_BODY_SIZE 20
static const struct tpm_input_header pcrextend_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(34),
.ordinal = TPM_ORD_PCR_EXTEND
.ordinal = cpu_to_be32(TPM_ORD_PCR_EXTEND)
};
static int tpm1_pcr_extend(struct tpm_chip *chip, int pcr_idx, const u8 *hash,
char *log_msg)
{
struct tpm_buf buf;
int rc;
rc = tpm_buf_init(&buf, TPM_TAG_RQU_COMMAND, TPM_ORD_PCR_EXTEND);
if (rc)
return rc;
tpm_buf_append_u32(&buf, pcr_idx);
tpm_buf_append(&buf, hash, TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, NULL, buf.data, EXTEND_PCR_RESULT_SIZE,
EXTEND_PCR_RESULT_BODY_SIZE, 0, log_msg);
tpm_buf_destroy(&buf);
return rc;
}
/**
* tpm_pcr_extend - extend pcr value with hash
* @chip_num: tpm idx # or ANY
......@@ -859,7 +883,6 @@ static const struct tpm_input_header pcrextend_header = {
*/
int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
{
struct tpm_cmd_t cmd;
int rc;
struct tpm_chip *chip;
struct tpm2_digest digest_list[ARRAY_SIZE(chip->active_banks)];
......@@ -885,13 +908,8 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
return rc;
}
cmd.header.in = pcrextend_header;
cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx);
memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE,
EXTEND_PCR_RESULT_BODY_SIZE, 0,
"attempting extend a PCR value");
rc = tpm1_pcr_extend(chip, pcr_idx, hash,
"attempting extend a PCR value");
tpm_put_ops(chip);
return rc;
}
......@@ -1060,13 +1078,13 @@ int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
}
EXPORT_SYMBOL_GPL(wait_for_tpm_stat);
#define TPM_ORD_SAVESTATE cpu_to_be32(152)
#define TPM_ORD_SAVESTATE 152
#define SAVESTATE_RESULT_SIZE 10
static const struct tpm_input_header savestate_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(10),
.ordinal = TPM_ORD_SAVESTATE
.ordinal = cpu_to_be32(TPM_ORD_SAVESTATE)
};
/*
......@@ -1090,15 +1108,9 @@ int tpm_pm_suspend(struct device *dev)
}
/* for buggy tpm, flush pcrs with extend to selected dummy */
if (tpm_suspend_pcr) {
cmd.header.in = pcrextend_header;
cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr);
memcpy(cmd.params.pcrextend_in.hash, dummy_hash,
TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE,
EXTEND_PCR_RESULT_BODY_SIZE, 0,
"extending dummy pcr before suspend");
}
if (tpm_suspend_pcr)
rc = tpm1_pcr_extend(chip, tpm_suspend_pcr, dummy_hash,
"extending dummy pcr before suspend");
/* now do the actual savestate */
for (try = 0; try < TPM_RETRY; try++) {
......@@ -1149,9 +1161,9 @@ EXPORT_SYMBOL_GPL(tpm_pm_resume);
#define TPM_GETRANDOM_RESULT_SIZE 18
static const struct tpm_input_header tpm_getrandom_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(14),
.ordinal = TPM_ORD_GET_RANDOM
.ordinal = cpu_to_be32(TPM_ORD_GET_RANDOM)
};
/**
......
......@@ -22,11 +22,11 @@
#define READ_PUBEK_RESULT_SIZE 314
#define READ_PUBEK_RESULT_MIN_BODY_SIZE (28 + 256)
#define TPM_ORD_READPUBEK cpu_to_be32(124)
#define TPM_ORD_READPUBEK 124
static const struct tpm_input_header tpm_readpubek_header = {
.tag = TPM_TAG_RQU_COMMAND,
.tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
.length = cpu_to_be32(30),
.ordinal = TPM_ORD_READPUBEK
.ordinal = cpu_to_be32(TPM_ORD_READPUBEK)
};
static ssize_t pubek_show(struct device *dev, struct device_attribute *attr,
char *buf)
......
......@@ -247,7 +247,7 @@ struct tpm_output_header {
__be32 return_code;
} __packed;
#define TPM_TAG_RQU_COMMAND cpu_to_be16(193)
#define TPM_TAG_RQU_COMMAND 193
struct stclear_flags_t {
__be16 tag;
......@@ -339,17 +339,6 @@ enum tpm_sub_capabilities {
TPM_CAP_PROP_TIS_DURATION = 0x120,
};
struct tpm_getcap_params_in {
__be32 cap;
__be32 subcap_size;
__be32 subcap;
} __packed;
struct tpm_getcap_params_out {
__be32 cap_size;
cap_t cap;
} __packed;
struct tpm_readpubek_params_out {
u8 algorithm[4];
u8 encscheme[2];
......@@ -374,11 +363,6 @@ struct tpm_pcrread_in {
__be32 pcr_idx;
} __packed;
struct tpm_pcrextend_in {
__be32 pcr_idx;
u8 hash[TPM_DIGEST_SIZE];
} __packed;
/* 128 bytes is an arbitrary cap. This could be as large as TPM_BUFSIZE - 18
* bytes, but 128 is still a relatively large number of random bytes and
* anything much bigger causes users of struct tpm_cmd_t to start getting
......@@ -399,13 +383,10 @@ struct tpm_startup_in {
} __packed;
typedef union {
struct tpm_getcap_params_out getcap_out;
struct tpm_readpubek_params_out readpubek_out;
u8 readpubek_out_buffer[sizeof(struct tpm_readpubek_params_out)];
struct tpm_getcap_params_in getcap_in;
struct tpm_pcrread_in pcrread_in;
struct tpm_pcrread_out pcrread_out;
struct tpm_pcrextend_in pcrextend_in;
struct tpm_getrandom_in getrandom_in;
struct tpm_getrandom_out getrandom_out;
struct tpm_startup_in startup_in;
......@@ -525,6 +506,7 @@ extern struct idr dev_nums_idr;
enum tpm_transmit_flags {
TPM_TRANSMIT_UNLOCKED = BIT(0),
TPM_TRANSMIT_RAW = BIT(1),
};
ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
......
......@@ -840,7 +840,7 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type)
/* In places where shutdown command is sent there's not much we can do
* except print the error code on a system failure.
*/
if (rc < 0)
if (rc < 0 && rc != -EPIPE)
dev_warn(&chip->dev, "transmit returned %d while stopping the TPM",
rc);
}
......
......@@ -144,13 +144,11 @@ static void atml_plat_remove(void)
struct tpm_chip *chip = dev_get_drvdata(&pdev->dev);
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
if (chip) {
tpm_chip_unregister(chip);
if (priv->have_region)
atmel_release_region(priv->base, priv->region_size);
atmel_put_base_addr(priv->iobase);
platform_device_unregister(pdev);
}
tpm_chip_unregister(chip);
if (priv->have_region)
atmel_release_region(priv->base, priv->region_size);
atmel_put_base_addr(priv->iobase);
platform_device_unregister(pdev);
}
static SIMPLE_DEV_PM_OPS(tpm_atml_pm, tpm_pm_suspend, tpm_pm_resume);
......
......@@ -70,6 +70,7 @@ struct tpm_inf_dev {
u8 buf[TPM_BUFSIZE + sizeof(u8)]; /* max. buffer size + addr */
struct tpm_chip *chip;
enum i2c_chip_type chip_type;
unsigned int adapterlimit;
};
static struct tpm_inf_dev tpm_dev;
......@@ -111,6 +112,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
int rc = 0;
int count;
unsigned int msglen = len;
/* Lock the adapter for the duration of the whole sequence. */
if (!tpm_dev.client->adapter->algo->master_xfer)
......@@ -131,27 +133,61 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
}
} else {
/* slb9635 protocol should work in all cases */
for (count = 0; count < MAX_COUNT; count++) {
rc = __i2c_transfer(tpm_dev.client->adapter, &msg1, 1);
if (rc > 0)
break; /* break here to skip sleep */
usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
}
if (rc <= 0)
goto out;
/* After the TPM has successfully received the register address
* it needs some time, thus we're sleeping here again, before
* retrieving the data
/* Expect to send one command message and one data message, but
* support looping over each or both if necessary.
*/
for (count = 0; count < MAX_COUNT; count++) {
usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
rc = __i2c_transfer(tpm_dev.client->adapter, &msg2, 1);
if (rc > 0)
break;
while (len > 0) {
/* slb9635 protocol should work in all cases */
for (count = 0; count < MAX_COUNT; count++) {
rc = __i2c_transfer(tpm_dev.client->adapter,
&msg1, 1);
if (rc > 0)
break; /* break here to skip sleep */
usleep_range(SLEEP_DURATION_LOW,
SLEEP_DURATION_HI);
}
if (rc <= 0)
goto out;
/* After the TPM has successfully received the register
* address it needs some time, thus we're sleeping here
* again, before retrieving the data
*/
for (count = 0; count < MAX_COUNT; count++) {
if (tpm_dev.adapterlimit) {
msglen = min_t(unsigned int,
tpm_dev.adapterlimit,
len);
msg2.len = msglen;
}
usleep_range(SLEEP_DURATION_LOW,
SLEEP_DURATION_HI);
rc = __i2c_transfer(tpm_dev.client->adapter,
&msg2, 1);
if (rc > 0) {
/* Since len is unsigned, make doubly
* sure we do not underflow it.
*/
if (msglen > len)
len = 0;
else
len -= msglen;
msg2.buf += msglen;
break;
}
/* If the I2C adapter rejected the request (e.g
* when the quirk read_max_len < len) fall back
* to a sane minimum value and try again.
*/
if (rc == -EOPNOTSUPP)
tpm_dev.adapterlimit =
I2C_SMBUS_BLOCK_MAX;
}
if (rc <= 0)
goto out;
}
}
......
......@@ -397,7 +397,7 @@ static int tpm_inf_pnp_probe(struct pnp_dev *dev,
int vendorid[2];
int version[2];
int productid[2];
char chipname[20];
const char *chipname;
struct tpm_chip *chip;
/* read IO-ports through PnP */
......@@ -488,13 +488,13 @@ static int tpm_inf_pnp_probe(struct pnp_dev *dev,
switch ((productid[0] << 8) | productid[1]) {
case 6:
snprintf(chipname, sizeof(chipname), " (SLD 9630 TT 1.1)");
chipname = " (SLD 9630 TT 1.1)";
break;
case 11:
snprintf(chipname, sizeof(chipname), " (SLB 9635 TT 1.2)");
chipname = " (SLB 9635 TT 1.2)";
break;
default:
snprintf(chipname, sizeof(chipname), " (unknown chip)");
chipname = " (unknown chip)";
break;
}
......
......@@ -80,6 +80,8 @@ static int has_hid(struct acpi_device *dev, const char *hid)
static inline int is_itpm(struct acpi_device *dev)
{
if (!dev)
return 0;
return has_hid(dev, "INTC0102");
}
#else
......@@ -89,6 +91,47 @@ static inline int is_itpm(struct acpi_device *dev)
}
#endif
#if defined(CONFIG_ACPI)
#define DEVICE_IS_TPM2 1
static const struct acpi_device_id tpm_acpi_tbl[] = {
{"MSFT0101", DEVICE_IS_TPM2},
{},
};
MODULE_DEVICE_TABLE(acpi, tpm_acpi_tbl);
static int check_acpi_tpm2(struct device *dev)
{
const struct acpi_device_id *aid = acpi_match_device(tpm_acpi_tbl, dev);
struct acpi_table_tpm2 *tbl;
acpi_status st;
if (!aid || aid->driver_data != DEVICE_IS_TPM2)
return 0;
/* If the ACPI TPM2 signature is matched then a global ACPI_SIG_TPM2
* table is mandatory
*/
st =
acpi_get_table(ACPI_SIG_TPM2, 1, (struct acpi_table_header **)&tbl);
if (ACPI_FAILURE(st) || tbl->header.length < sizeof(*tbl)) {
dev_err(dev, FW_BUG "failed to get TPM2 ACPI table\n");
return -EINVAL;
}
/* The tpm2_crb driver handles this device */
if (tbl->start_method != ACPI_TPM2_MEMORY_MAPPED)
return -ENODEV;
return 0;
}
#else
static int check_acpi_tpm2(struct device *dev)
{
return 0;
}
#endif
static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
u8 *result)
{
......@@ -141,11 +184,15 @@ static const struct tpm_tis_phy_ops tpm_tcg = {
.write32 = tpm_tcg_write32,
};
static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
acpi_handle acpi_dev_handle)
static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info)
{
struct tpm_tis_tcg_phy *phy;
int irq = -1;
int rc;
rc = check_acpi_tpm2(dev);
if (rc)
return rc;
phy = devm_kzalloc(dev, sizeof(struct tpm_tis_tcg_phy), GFP_KERNEL);
if (phy == NULL)
......@@ -158,11 +205,11 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
if (interrupts)
irq = tpm_info->irq;
if (itpm)
if (itpm || is_itpm(ACPI_COMPANION(dev)))
phy->priv.flags |= TPM_TIS_ITPM_WORKAROUND;
return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg,
acpi_dev_handle);
ACPI_HANDLE(dev));
}
static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume);
......@@ -171,7 +218,6 @@ static int tpm_tis_pnp_init(struct pnp_dev *pnp_dev,
const struct pnp_device_id *pnp_id)
{
struct tpm_info tpm_info = {};
acpi_handle acpi_dev_handle = NULL;
struct resource *res;
res = pnp_get_resource(pnp_dev, IORESOURCE_MEM, 0);
......@@ -184,14 +230,7 @@ static int tpm_tis_pnp_init(struct pnp_dev *pnp_dev,
else
tpm_info.irq = -1;
if (pnp_acpi_device(pnp_dev)) {
if (is_itpm(pnp_acpi_device(pnp_dev)))
itpm = true;
acpi_dev_handle = ACPI_HANDLE(&pnp_dev->dev);
}
return tpm_tis_init(&pnp_dev->dev, &tpm_info, acpi_dev_handle);
return tpm_tis_init(&pnp_dev->dev, &tpm_info);
}
static struct pnp_device_id tpm_pnp_tbl[] = {
......@@ -231,93 +270,6 @@ module_param_string(hid, tpm_pnp_tbl[TIS_HID_USR_IDX].id,
sizeof(tpm_pnp_tbl[TIS_HID_USR_IDX].id), 0444);
MODULE_PARM_DESC(hid, "Set additional specific HID for this driver to probe");
#ifdef CONFIG_ACPI
static int tpm_check_resource(struct acpi_resource *ares, void *data)
{
struct tpm_info *tpm_info = (struct tpm_info *) data;
struct resource res;
if (acpi_dev_resource_interrupt(ares, 0, &res))
tpm_info->irq = res.start;
else if (acpi_dev_resource_memory(ares, &res)) {
tpm_info->res = res;
tpm_info->res.name = NULL;
}
return 1;
}
static int tpm_tis_acpi_init(struct acpi_device *acpi_dev)
{
struct acpi_table_tpm2 *tbl;
acpi_status st;
struct list_head resources;
struct tpm_info tpm_info = {};
int ret;
st = acpi_get_table(ACPI_SIG_TPM2, 1,
(struct acpi_table_header **) &tbl);
if (ACPI_FAILURE(st) || tbl->header.length < sizeof(*tbl)) {
dev_err(&acpi_dev->dev,
FW_BUG "failed to get TPM2 ACPI table\n");
return -EINVAL;
}
if (tbl->start_method != ACPI_TPM2_MEMORY_MAPPED)
return -ENODEV;
INIT_LIST_HEAD(&resources);
tpm_info.irq = -1;
ret = acpi_dev_get_resources(acpi_dev, &resources, tpm_check_resource,
&tpm_info);
if (ret < 0)
return ret;
acpi_dev_free_resource_list(&resources);
if (resource_type(&tpm_info.res) != IORESOURCE_MEM) {
dev_err(&acpi_dev->dev,
FW_BUG "TPM2 ACPI table does not define a memory resource\n");
return -EINVAL;
}
if (is_itpm(acpi_dev))
itpm = true;
return tpm_tis_init(&acpi_dev->dev, &tpm_info, acpi_dev->handle);
}
static int tpm_tis_acpi_remove(struct acpi_device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(&dev->dev);
tpm_chip_unregister(chip);
tpm_tis_remove(chip);
return 0;
}
static struct acpi_device_id tpm_acpi_tbl[] = {
{"MSFT0101", 0}, /* TPM 2.0 */
/* Add new here */
{"", 0}, /* User Specified */
{"", 0} /* Terminator */
};
MODULE_DEVICE_TABLE(acpi, tpm_acpi_tbl);
static struct acpi_driver tis_acpi_driver = {
.name = "tpm_tis",
.ids = tpm_acpi_tbl,
.ops = {
.add = tpm_tis_acpi_init,
.remove = tpm_tis_acpi_remove,
},
.drv = {
.pm = &tpm_tis_pm,
},
};
#endif
static struct platform_device *force_pdev;
static int tpm_tis_plat_probe(struct platform_device *pdev)
......@@ -332,18 +284,16 @@ static int tpm_tis_plat_probe(struct platform_device *pdev)
}
tpm_info.res = *res;
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (res) {
tpm_info.irq = res->start;
} else {
if (pdev == force_pdev)
tpm_info.irq = platform_get_irq(pdev, 0);
if (tpm_info.irq <= 0) {
if (pdev != force_pdev)
tpm_info.irq = -1;
else
/* When forcing auto probe the IRQ */
tpm_info.irq = 0;
}
return tpm_tis_init(&pdev->dev, &tpm_info, NULL);
return tpm_tis_init(&pdev->dev, &tpm_info);
}
static int tpm_tis_plat_remove(struct platform_device *pdev)
......@@ -371,6 +321,7 @@ static struct platform_driver tis_drv = {
.name = "tpm_tis",
.pm = &tpm_tis_pm,
.of_match_table = of_match_ptr(tis_of_platform_match),
.acpi_match_table = ACPI_PTR(tpm_acpi_tbl),
},
};
......@@ -413,11 +364,6 @@ static int __init init_tis(void)
if (rc)
goto err_platform;
#ifdef CONFIG_ACPI
rc = acpi_bus_register_driver(&tis_acpi_driver);
if (rc)
goto err_acpi;
#endif
if (IS_ENABLED(CONFIG_PNP)) {
rc = pnp_register_driver(&tis_pnp_driver);
......@@ -428,10 +374,6 @@ static int __init init_tis(void)
return 0;
err_pnp:
#ifdef CONFIG_ACPI
acpi_bus_unregister_driver(&tis_acpi_driver);
err_acpi:
#endif
platform_driver_unregister(&tis_drv);
err_platform:
if (force_pdev)
......@@ -443,9 +385,6 @@ static int __init init_tis(void)
static void __exit cleanup_tis(void)
{
pnp_unregister_driver(&tis_pnp_driver);
#ifdef CONFIG_ACPI
acpi_bus_unregister_driver(&tis_acpi_driver);
#endif
platform_driver_unregister(&tis_drv);
if (force_pdev)
......
......@@ -43,6 +43,7 @@ struct proxy_dev {
#define STATE_OPENED_FLAG BIT(0)
#define STATE_WAIT_RESPONSE_FLAG BIT(1) /* waiting for emulator response */
#define STATE_REGISTERED_FLAG BIT(2)
#define STATE_DRIVER_COMMAND BIT(3) /* sending a driver specific command */
size_t req_len; /* length of queued TPM request */
size_t resp_len; /* length of queued TPM response */
......@@ -299,6 +300,28 @@ static int vtpm_proxy_tpm_op_recv(struct tpm_chip *chip, u8 *buf, size_t count)
return len;
}
static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
u8 *buf, size_t count)
{
struct tpm_input_header *hdr = (struct tpm_input_header *)buf;
if (count < sizeof(struct tpm_input_header))
return 0;
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
switch (be32_to_cpu(hdr->ordinal)) {
case TPM2_CC_SET_LOCALITY:
return 1;
}
} else {
switch (be32_to_cpu(hdr->ordinal)) {
case TPM_ORD_SET_LOCALITY:
return 1;
}
}
return 0;
}
/*
* Called when core TPM driver forwards TPM requests to 'server side'.
*
......@@ -321,6 +344,10 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
return -EIO;
}
if (!(proxy_dev->state & STATE_DRIVER_COMMAND) &&
vtpm_proxy_is_driver_command(chip, buf, count))
return -EFAULT;
mutex_lock(&proxy_dev->buf_lock);
if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
......@@ -371,6 +398,47 @@ static bool vtpm_proxy_tpm_req_canceled(struct tpm_chip *chip, u8 status)
return ret;
}
static int vtpm_proxy_request_locality(struct tpm_chip *chip, int locality)
{
struct tpm_buf buf;
int rc;
const struct tpm_output_header *header;
struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
if (chip->flags & TPM_CHIP_FLAG_TPM2)
rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS,
TPM2_CC_SET_LOCALITY);
else
rc = tpm_buf_init(&buf, TPM_TAG_RQU_COMMAND,
TPM_ORD_SET_LOCALITY);
if (rc)
return rc;
tpm_buf_append_u8(&buf, locality);
proxy_dev->state |= STATE_DRIVER_COMMAND;
rc = tpm_transmit_cmd(chip, NULL, buf.data, tpm_buf_length(&buf), 0,
TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW,
"attempting to set locality");
proxy_dev->state &= ~STATE_DRIVER_COMMAND;
if (rc < 0) {
locality = rc;
goto out;
}
header = (const struct tpm_output_header *)buf.data;
rc = be32_to_cpu(header->return_code);
if (rc)
locality = -1;
out:
tpm_buf_destroy(&buf);
return locality;
}
static const struct tpm_class_ops vtpm_proxy_tpm_ops = {
.flags = TPM_OPS_AUTO_STARTUP,
.recv = vtpm_proxy_tpm_op_recv,
......@@ -380,6 +448,7 @@ static const struct tpm_class_ops vtpm_proxy_tpm_ops = {
.req_complete_mask = VTPM_PROXY_REQ_COMPLETE_FLAG,
.req_complete_val = VTPM_PROXY_REQ_COMPLETE_FLAG,
.req_canceled = vtpm_proxy_tpm_req_canceled,
.request_locality = vtpm_proxy_request_locality,
};
/*
......
......@@ -45,7 +45,7 @@ static int tpmrm_release(struct inode *inode, struct file *file)
return 0;
}
ssize_t tpmrm_write(struct file *file, const char __user *buf,
static ssize_t tpmrm_write(struct file *file, const char __user *buf,
size_t size, loff_t *off)
{
struct file_priv *fpriv = file->private_data;
......
......@@ -10,7 +10,8 @@ obj-$(CONFIG_INFINIBAND_USER_ACCESS) += ib_uverbs.o ib_ucm.o \
ib_core-y := packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
device.o fmr_pool.o cache.o netlink.o \
roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
multicast.o mad.o smi.o agent.o mad_rmpp.o
multicast.o mad.o smi.o agent.o mad_rmpp.o \
security.o
ib_core-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o umem_rbtree.o
ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o
......
......@@ -53,6 +53,7 @@ struct ib_update_work {
struct work_struct work;
struct ib_device *device;
u8 port_num;
bool enforce_security;
};
union ib_gid zgid;
......@@ -911,6 +912,26 @@ int ib_get_cached_pkey(struct ib_device *device,
}
EXPORT_SYMBOL(ib_get_cached_pkey);
int ib_get_cached_subnet_prefix(struct ib_device *device,
u8 port_num,
u64 *sn_pfx)
{
unsigned long flags;
int p;
if (port_num < rdma_start_port(device) ||
port_num > rdma_end_port(device))
return -EINVAL;
p = port_num - rdma_start_port(device);
read_lock_irqsave(&device->cache.lock, flags);
*sn_pfx = device->cache.ports[p].subnet_prefix;
read_unlock_irqrestore(&device->cache.lock, flags);
return 0;
}
EXPORT_SYMBOL(ib_get_cached_subnet_prefix);
int ib_find_cached_pkey(struct ib_device *device,
u8 port_num,
u16 pkey,
......@@ -1022,7 +1043,8 @@ int ib_get_cached_port_state(struct ib_device *device,
EXPORT_SYMBOL(ib_get_cached_port_state);
static void ib_cache_update(struct ib_device *device,
u8 port)
u8 port,
bool enforce_security)
{
struct ib_port_attr *tprops = NULL;
struct ib_pkey_cache *pkey_cache = NULL, *old_pkey_cache;
......@@ -1108,8 +1130,15 @@ static void ib_cache_update(struct ib_device *device,
device->cache.ports[port - rdma_start_port(device)].port_state =
tprops->state;
device->cache.ports[port - rdma_start_port(device)].subnet_prefix =
tprops->subnet_prefix;
write_unlock_irq(&device->cache.lock);
if (enforce_security)
ib_security_cache_change(device,
port,
tprops->subnet_prefix);
kfree(gid_cache);
kfree(old_pkey_cache);
kfree(tprops);
......@@ -1126,7 +1155,9 @@ static void ib_cache_task(struct work_struct *_work)
struct ib_update_work *work =
container_of(_work, struct ib_update_work, work);
ib_cache_update(work->device, work->port_num);
ib_cache_update(work->device,
work->port_num,
work->enforce_security);
kfree(work);
}
......@@ -1147,6 +1178,12 @@ static void ib_cache_event(struct ib_event_handler *handler,
INIT_WORK(&work->work, ib_cache_task);
work->device = event->device;
work->port_num = event->element.port_num;
if (event->event == IB_EVENT_PKEY_CHANGE ||
event->event == IB_EVENT_GID_CHANGE)
work->enforce_security = true;
else
work->enforce_security = false;
queue_work(ib_wq, &work->work);
}
}
......@@ -1172,7 +1209,7 @@ int ib_cache_setup_one(struct ib_device *device)
goto out;
for (p = 0; p <= rdma_end_port(device) - rdma_start_port(device); ++p)
ib_cache_update(device, p + rdma_start_port(device));
ib_cache_update(device, p + rdma_start_port(device), true);
INIT_IB_EVENT_HANDLER(&device->cache.event_handler,
device, ib_cache_event);
......
......@@ -38,6 +38,16 @@
#include <linux/cgroup_rdma.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_mad.h>
#include "mad_priv.h"
struct pkey_index_qp_list {
struct list_head pkey_index_list;
u16 pkey_index;
/* Lock to hold while iterating the qp_list. */
spinlock_t qp_list_lock;
struct list_head qp_list;
};
#if IS_ENABLED(CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS)
int cma_configfs_init(void);
@@ -186,4 +196,109 @@ int ib_nl_handle_set_timeout(struct sk_buff *skb,
int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
struct netlink_callback *cb);
int ib_get_cached_subnet_prefix(struct ib_device *device,
u8 port_num,
u64 *sn_pfx);
#ifdef CONFIG_SECURITY_INFINIBAND
int ib_security_pkey_access(struct ib_device *dev,
u8 port_num,
u16 pkey_index,
void *sec);
void ib_security_destroy_port_pkey_list(struct ib_device *device);
void ib_security_cache_change(struct ib_device *device,
u8 port_num,
u64 subnet_prefix);
int ib_security_modify_qp(struct ib_qp *qp,
struct ib_qp_attr *qp_attr,
int qp_attr_mask,
struct ib_udata *udata);
int ib_create_qp_security(struct ib_qp *qp, struct ib_device *dev);
void ib_destroy_qp_security_begin(struct ib_qp_security *sec);
void ib_destroy_qp_security_abort(struct ib_qp_security *sec);
void ib_destroy_qp_security_end(struct ib_qp_security *sec);
int ib_open_shared_qp_security(struct ib_qp *qp, struct ib_device *dev);
void ib_close_shared_qp_security(struct ib_qp_security *sec);
int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
enum ib_qp_type qp_type);
void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent);
int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index);
#else
static inline int ib_security_pkey_access(struct ib_device *dev,
u8 port_num,
u16 pkey_index,
void *sec)
{
return 0;
}
static inline void ib_security_destroy_port_pkey_list(struct ib_device *device)
{
}
static inline void ib_security_cache_change(struct ib_device *device,
u8 port_num,
u64 subnet_prefix)
{
}
static inline int ib_security_modify_qp(struct ib_qp *qp,
struct ib_qp_attr *qp_attr,
int qp_attr_mask,
struct ib_udata *udata)
{
return qp->device->modify_qp(qp->real_qp,
qp_attr,
qp_attr_mask,
udata);
}
static inline int ib_create_qp_security(struct ib_qp *qp,
struct ib_device *dev)
{
return 0;
}
static inline void ib_destroy_qp_security_begin(struct ib_qp_security *sec)
{
}
static inline void ib_destroy_qp_security_abort(struct ib_qp_security *sec)
{
}
static inline void ib_destroy_qp_security_end(struct ib_qp_security *sec)
{
}
static inline int ib_open_shared_qp_security(struct ib_qp *qp,
struct ib_device *dev)
{
return 0;
}
static inline void ib_close_shared_qp_security(struct ib_qp_security *sec)
{
}
static inline int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
enum ib_qp_type qp_type)
{
return 0;
}
static inline void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
{
}
static inline int ib_mad_enforce_security(struct ib_mad_agent_private *map,
u16 pkey_index)
{
return 0;
}
#endif
#endif /* _CORE_PRIV_H */
@@ -39,6 +39,8 @@
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/security.h>
#include <linux/notifier.h>
#include <rdma/rdma_netlink.h>
#include <rdma/ib_addr.h>
#include <rdma/ib_cache.h>
@@ -82,6 +84,14 @@ static LIST_HEAD(client_list);
static DEFINE_MUTEX(device_mutex);
static DECLARE_RWSEM(lists_rwsem);
static int ib_security_change(struct notifier_block *nb, unsigned long event,
void *lsm_data);
static void ib_policy_change_task(struct work_struct *work);
static DECLARE_WORK(ib_policy_change_work, ib_policy_change_task);
static struct notifier_block ibdev_lsm_nb = {
.notifier_call = ib_security_change,
};
static int ib_device_check_mandatory(struct ib_device *device)
{
@@ -325,6 +335,64 @@ void ib_get_device_fw_str(struct ib_device *dev, char *str, size_t str_len)
}
EXPORT_SYMBOL(ib_get_device_fw_str);
static int setup_port_pkey_list(struct ib_device *device)
{
int i;
	/*
	 * device->port_pkey_list is indexed directly by the port number.
	 * Therefore it is declared as a 1-based array with potential empty
	 * slots at the beginning.
	 */
device->port_pkey_list = kcalloc(rdma_end_port(device) + 1,
sizeof(*device->port_pkey_list),
GFP_KERNEL);
if (!device->port_pkey_list)
return -ENOMEM;
for (i = 0; i < (rdma_end_port(device) + 1); i++) {
spin_lock_init(&device->port_pkey_list[i].list_lock);
INIT_LIST_HEAD(&device->port_pkey_list[i].pkey_list);
}
return 0;
}
static void ib_policy_change_task(struct work_struct *work)
{
struct ib_device *dev;
down_read(&lists_rwsem);
list_for_each_entry(dev, &device_list, core_list) {
int i;
for (i = rdma_start_port(dev); i <= rdma_end_port(dev); i++) {
u64 sp;
int ret = ib_get_cached_subnet_prefix(dev,
i,
&sp);
WARN_ONCE(ret,
"ib_get_cached_subnet_prefix err: %d, this should never happen here\n",
ret);
ib_security_cache_change(dev, i, sp);
}
}
up_read(&lists_rwsem);
}
static int ib_security_change(struct notifier_block *nb, unsigned long event,
void *lsm_data)
{
if (event != LSM_POLICY_CHANGE)
return NOTIFY_DONE;
schedule_work(&ib_policy_change_work);
return NOTIFY_OK;
}
/**
* ib_register_device - Register an IB device with IB core
* @device:Device to register
@@ -385,6 +453,12 @@ int ib_register_device(struct ib_device *device,
goto out;
}
ret = setup_port_pkey_list(device);
if (ret) {
pr_warn("Couldn't create per port_pkey_list\n");
goto out;
}
ret = ib_cache_setup_one(device);
if (ret) {
pr_warn("Couldn't set up InfiniBand P_Key/GID cache\n");
@@ -468,6 +542,9 @@ void ib_unregister_device(struct ib_device *device)
ib_device_unregister_sysfs(device);
ib_cache_cleanup_one(device);
ib_security_destroy_port_pkey_list(device);
kfree(device->port_pkey_list);
down_write(&lists_rwsem);
spin_lock_irqsave(&device->client_data_lock, flags);
list_for_each_entry_safe(context, tmp, &device->client_data_list, list)
@@ -1082,10 +1159,18 @@ static int __init ib_core_init(void)
goto err_sa;
}
ret = register_lsm_notifier(&ibdev_lsm_nb);
if (ret) {
pr_warn("Couldn't register LSM notifier. ret %d\n", ret);
goto err_ibnl_clients;
}
ib_cache_setup();
return 0;
err_ibnl_clients:
ib_remove_ibnl_clients();
err_sa:
ib_sa_cleanup();
err_mad:
@@ -1105,6 +1190,7 @@ static int __init ib_core_init(void)
static void __exit ib_core_cleanup(void)
{
unregister_lsm_notifier(&ibdev_lsm_nb);
ib_cache_cleanup();
ib_remove_ibnl_clients();
ib_sa_cleanup();
......
@@ -40,9 +40,11 @@
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/security.h>
#include <rdma/ib_cache.h>
#include "mad_priv.h"
#include "core_priv.h"
#include "mad_rmpp.h"
#include "smi.h"
#include "opa_smi.h"
@@ -369,6 +371,12 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
atomic_set(&mad_agent_priv->refcount, 1);
init_completion(&mad_agent_priv->comp);
ret2 = ib_mad_agent_security_setup(&mad_agent_priv->agent, qp_type);
if (ret2) {
ret = ERR_PTR(ret2);
goto error4;
}
spin_lock_irqsave(&port_priv->reg_lock, flags);
mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;
@@ -386,7 +394,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
if (method) {
if (method_in_use(&method,
mad_reg_req))
-				goto error4;
+				goto error5;
}
}
ret2 = add_nonoui_reg_req(mad_reg_req, mad_agent_priv,
@@ -402,14 +410,14 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
if (is_vendor_method_in_use(
vendor_class,
mad_reg_req))
-				goto error4;
+				goto error5;
}
}
ret2 = add_oui_reg_req(mad_reg_req, mad_agent_priv);
}
if (ret2) {
ret = ERR_PTR(ret2);
-		goto error4;
+		goto error5;
}
}
@@ -418,9 +426,10 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
spin_unlock_irqrestore(&port_priv->reg_lock, flags);
return &mad_agent_priv->agent;
-error4:
+error5:
	spin_unlock_irqrestore(&port_priv->reg_lock, flags);
+	ib_mad_agent_security_cleanup(&mad_agent_priv->agent);
+error4:
kfree(reg_req);
error3:
kfree(mad_agent_priv);
@@ -491,6 +500,7 @@ struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
struct ib_mad_agent *ret;
struct ib_mad_snoop_private *mad_snoop_priv;
int qpn;
int err;
/* Validate parameters */
if ((is_snooping_sends(mad_snoop_flags) && !snoop_handler) ||
@@ -525,17 +535,25 @@ struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
mad_snoop_priv->agent.port_num = port_num;
mad_snoop_priv->mad_snoop_flags = mad_snoop_flags;
init_completion(&mad_snoop_priv->comp);
err = ib_mad_agent_security_setup(&mad_snoop_priv->agent, qp_type);
if (err) {
ret = ERR_PTR(err);
goto error2;
}
mad_snoop_priv->snoop_index = register_snoop_agent(
&port_priv->qp_info[qpn],
mad_snoop_priv);
if (mad_snoop_priv->snoop_index < 0) {
ret = ERR_PTR(mad_snoop_priv->snoop_index);
-		goto error2;
+		goto error3;
}
atomic_set(&mad_snoop_priv->refcount, 1);
return &mad_snoop_priv->agent;
error3:
ib_mad_agent_security_cleanup(&mad_snoop_priv->agent);
error2:
kfree(mad_snoop_priv);
error1:
@@ -581,6 +599,8 @@ static void unregister_mad_agent(struct ib_mad_agent_private *mad_agent_priv)
deref_mad_agent(mad_agent_priv);
wait_for_completion(&mad_agent_priv->comp);
ib_mad_agent_security_cleanup(&mad_agent_priv->agent);
kfree(mad_agent_priv->reg_req);
kfree(mad_agent_priv);
}
@@ -599,6 +619,8 @@ static void unregister_mad_snoop(struct ib_mad_snoop_private *mad_snoop_priv)
deref_snoop_agent(mad_snoop_priv);
wait_for_completion(&mad_snoop_priv->comp);
ib_mad_agent_security_cleanup(&mad_snoop_priv->agent);
kfree(mad_snoop_priv);
}
@@ -1215,12 +1237,16 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
/* Walk list of send WRs and post each on send list */
for (; send_buf; send_buf = next_send_buf) {
mad_send_wr = container_of(send_buf,
struct ib_mad_send_wr_private,
send_buf);
mad_agent_priv = mad_send_wr->mad_agent_priv;
ret = ib_mad_enforce_security(mad_agent_priv,
mad_send_wr->send_wr.pkey_index);
if (ret)
goto error;
if (!send_buf->mad_agent->send_handler ||
(send_buf->timeout_ms &&
!send_buf->mad_agent->recv_handler)) {
@@ -1946,6 +1972,14 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
struct ib_mad_send_wr_private *mad_send_wr;
struct ib_mad_send_wc mad_send_wc;
unsigned long flags;
int ret;
	ret = ib_mad_enforce_security(mad_agent_priv,
				      mad_recv_wc->wc->pkey_index);
	if (ret) {
		ib_free_recv_mad(mad_recv_wc);
		deref_mad_agent(mad_agent_priv);
		return;
	}
INIT_LIST_HEAD(&mad_recv_wc->rmpp_list);
list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list);
@@ -2003,6 +2037,8 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
mad_recv_wc);
deref_mad_agent(mad_agent_priv);
}
return;
}
static enum smi_action handle_ib_smi(const struct ib_mad_port_private *port_priv,
......
/*
* Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifdef CONFIG_SECURITY_INFINIBAND
#include <linux/security.h>
#include <linux/completion.h>
#include <linux/list.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
#include "core_priv.h"
#include "mad_priv.h"
static struct pkey_index_qp_list *get_pkey_idx_qp_list(struct ib_port_pkey *pp)
{
struct pkey_index_qp_list *pkey = NULL;
struct pkey_index_qp_list *tmp_pkey;
struct ib_device *dev = pp->sec->dev;
spin_lock(&dev->port_pkey_list[pp->port_num].list_lock);
list_for_each_entry(tmp_pkey,
&dev->port_pkey_list[pp->port_num].pkey_list,
pkey_index_list) {
if (tmp_pkey->pkey_index == pp->pkey_index) {
pkey = tmp_pkey;
break;
}
}
spin_unlock(&dev->port_pkey_list[pp->port_num].list_lock);
return pkey;
}
static int get_pkey_and_subnet_prefix(struct ib_port_pkey *pp,
u16 *pkey,
u64 *subnet_prefix)
{
struct ib_device *dev = pp->sec->dev;
int ret;
ret = ib_get_cached_pkey(dev, pp->port_num, pp->pkey_index, pkey);
if (ret)
return ret;
ret = ib_get_cached_subnet_prefix(dev, pp->port_num, subnet_prefix);
return ret;
}
static int enforce_qp_pkey_security(u16 pkey,
u64 subnet_prefix,
struct ib_qp_security *qp_sec)
{
struct ib_qp_security *shared_qp_sec;
int ret;
ret = security_ib_pkey_access(qp_sec->security, subnet_prefix, pkey);
if (ret)
return ret;
if (qp_sec->qp == qp_sec->qp->real_qp) {
list_for_each_entry(shared_qp_sec,
&qp_sec->shared_qp_list,
shared_qp_list) {
ret = security_ib_pkey_access(shared_qp_sec->security,
subnet_prefix,
pkey);
if (ret)
return ret;
}
}
return 0;
}
/* The caller of this function must hold the QP security
 * mutex of the QP of the security structure in *pps.
 *
 * It takes separate ports_pkeys and security structures
 * because in some cases the pps will be for new settings,
 * or the pps will be for the real QP while the security
 * structure is for a shared QP.
 */
static int check_qp_port_pkey_settings(struct ib_ports_pkeys *pps,
struct ib_qp_security *sec)
{
u64 subnet_prefix;
u16 pkey;
int ret = 0;
if (!pps)
return 0;
	if (pps->main.state != IB_PORT_PKEY_NOT_VALID) {
		ret = get_pkey_and_subnet_prefix(&pps->main,
						 &pkey,
						 &subnet_prefix);
		if (ret)
			return ret;

		ret = enforce_qp_pkey_security(pkey,
					       subnet_prefix,
					       sec);
	}
if (ret)
return ret;
	if (pps->alt.state != IB_PORT_PKEY_NOT_VALID) {
		ret = get_pkey_and_subnet_prefix(&pps->alt,
						 &pkey,
						 &subnet_prefix);
		if (ret)
			return ret;

		ret = enforce_qp_pkey_security(pkey,
					       subnet_prefix,
					       sec);
	}
return ret;
}
/* The caller of this function must hold the QP security
* mutex.
*/
static void qp_to_error(struct ib_qp_security *sec)
{
struct ib_qp_security *shared_qp_sec;
struct ib_qp_attr attr = {
.qp_state = IB_QPS_ERR
};
struct ib_event event = {
.event = IB_EVENT_QP_FATAL
};
/* If the QP is in the process of being destroyed
* the qp pointer in the security structure is
* undefined. It cannot be modified now.
*/
if (sec->destroying)
return;
ib_modify_qp(sec->qp,
&attr,
IB_QP_STATE);
if (sec->qp->event_handler && sec->qp->qp_context) {
event.element.qp = sec->qp;
sec->qp->event_handler(&event,
sec->qp->qp_context);
}
list_for_each_entry(shared_qp_sec,
&sec->shared_qp_list,
shared_qp_list) {
struct ib_qp *qp = shared_qp_sec->qp;
if (qp->event_handler && qp->qp_context) {
event.element.qp = qp;
event.device = qp->device;
qp->event_handler(&event,
qp->qp_context);
}
}
}
static inline void check_pkey_qps(struct pkey_index_qp_list *pkey,
struct ib_device *device,
u8 port_num,
u64 subnet_prefix)
{
struct ib_port_pkey *pp, *tmp_pp;
bool comp;
LIST_HEAD(to_error_list);
u16 pkey_val;
if (!ib_get_cached_pkey(device,
port_num,
pkey->pkey_index,
&pkey_val)) {
spin_lock(&pkey->qp_list_lock);
list_for_each_entry(pp, &pkey->qp_list, qp_list) {
if (atomic_read(&pp->sec->error_list_count))
continue;
if (enforce_qp_pkey_security(pkey_val,
subnet_prefix,
pp->sec)) {
atomic_inc(&pp->sec->error_list_count);
list_add(&pp->to_error_list,
&to_error_list);
}
}
spin_unlock(&pkey->qp_list_lock);
}
list_for_each_entry_safe(pp,
tmp_pp,
&to_error_list,
to_error_list) {
mutex_lock(&pp->sec->mutex);
qp_to_error(pp->sec);
list_del(&pp->to_error_list);
atomic_dec(&pp->sec->error_list_count);
comp = pp->sec->destroying;
mutex_unlock(&pp->sec->mutex);
if (comp)
complete(&pp->sec->error_complete);
}
}
/* The caller of this function must hold the QP security
* mutex.
*/
static int port_pkey_list_insert(struct ib_port_pkey *pp)
{
struct pkey_index_qp_list *tmp_pkey;
struct pkey_index_qp_list *pkey;
struct ib_device *dev;
u8 port_num = pp->port_num;
int ret = 0;
if (pp->state != IB_PORT_PKEY_VALID)
return 0;
dev = pp->sec->dev;
pkey = get_pkey_idx_qp_list(pp);
if (!pkey) {
bool found = false;
pkey = kzalloc(sizeof(*pkey), GFP_KERNEL);
if (!pkey)
return -ENOMEM;
spin_lock(&dev->port_pkey_list[port_num].list_lock);
/* Check for the PKey again. A racing process may
* have created it.
*/
list_for_each_entry(tmp_pkey,
&dev->port_pkey_list[port_num].pkey_list,
pkey_index_list) {
if (tmp_pkey->pkey_index == pp->pkey_index) {
kfree(pkey);
pkey = tmp_pkey;
found = true;
break;
}
}
if (!found) {
pkey->pkey_index = pp->pkey_index;
spin_lock_init(&pkey->qp_list_lock);
INIT_LIST_HEAD(&pkey->qp_list);
list_add(&pkey->pkey_index_list,
&dev->port_pkey_list[port_num].pkey_list);
}
spin_unlock(&dev->port_pkey_list[port_num].list_lock);
}
spin_lock(&pkey->qp_list_lock);
list_add(&pp->qp_list, &pkey->qp_list);
spin_unlock(&pkey->qp_list_lock);
pp->state = IB_PORT_PKEY_LISTED;
return ret;
}
/* The caller of this function must hold the QP security
* mutex.
*/
static void port_pkey_list_remove(struct ib_port_pkey *pp)
{
struct pkey_index_qp_list *pkey;
if (pp->state != IB_PORT_PKEY_LISTED)
return;
pkey = get_pkey_idx_qp_list(pp);
spin_lock(&pkey->qp_list_lock);
list_del(&pp->qp_list);
spin_unlock(&pkey->qp_list_lock);
/* The setting may still be valid, i.e. after
* a destroy has failed for example.
*/
pp->state = IB_PORT_PKEY_VALID;
}
static void destroy_qp_security(struct ib_qp_security *sec)
{
security_ib_free_security(sec->security);
kfree(sec->ports_pkeys);
kfree(sec);
}
/* The caller of this function must hold the QP security
* mutex.
*/
static struct ib_ports_pkeys *get_new_pps(const struct ib_qp *qp,
const struct ib_qp_attr *qp_attr,
int qp_attr_mask)
{
struct ib_ports_pkeys *new_pps;
struct ib_ports_pkeys *qp_pps = qp->qp_sec->ports_pkeys;
new_pps = kzalloc(sizeof(*new_pps), GFP_KERNEL);
if (!new_pps)
return NULL;
if (qp_attr_mask & (IB_QP_PKEY_INDEX | IB_QP_PORT)) {
if (!qp_pps) {
new_pps->main.port_num = qp_attr->port_num;
new_pps->main.pkey_index = qp_attr->pkey_index;
} else {
new_pps->main.port_num = (qp_attr_mask & IB_QP_PORT) ?
qp_attr->port_num :
qp_pps->main.port_num;
new_pps->main.pkey_index =
(qp_attr_mask & IB_QP_PKEY_INDEX) ?
qp_attr->pkey_index :
qp_pps->main.pkey_index;
}
new_pps->main.state = IB_PORT_PKEY_VALID;
} else if (qp_pps) {
new_pps->main.port_num = qp_pps->main.port_num;
new_pps->main.pkey_index = qp_pps->main.pkey_index;
if (qp_pps->main.state != IB_PORT_PKEY_NOT_VALID)
new_pps->main.state = IB_PORT_PKEY_VALID;
}
if (qp_attr_mask & IB_QP_ALT_PATH) {
new_pps->alt.port_num = qp_attr->alt_port_num;
new_pps->alt.pkey_index = qp_attr->alt_pkey_index;
new_pps->alt.state = IB_PORT_PKEY_VALID;
} else if (qp_pps) {
new_pps->alt.port_num = qp_pps->alt.port_num;
new_pps->alt.pkey_index = qp_pps->alt.pkey_index;
if (qp_pps->alt.state != IB_PORT_PKEY_NOT_VALID)
new_pps->alt.state = IB_PORT_PKEY_VALID;
}
new_pps->main.sec = qp->qp_sec;
new_pps->alt.sec = qp->qp_sec;
return new_pps;
}
int ib_open_shared_qp_security(struct ib_qp *qp, struct ib_device *dev)
{
struct ib_qp *real_qp = qp->real_qp;
int ret;
ret = ib_create_qp_security(qp, dev);
if (ret)
return ret;
mutex_lock(&real_qp->qp_sec->mutex);
ret = check_qp_port_pkey_settings(real_qp->qp_sec->ports_pkeys,
qp->qp_sec);
if (ret)
goto ret;
if (qp != real_qp)
list_add(&qp->qp_sec->shared_qp_list,
&real_qp->qp_sec->shared_qp_list);
ret:
mutex_unlock(&real_qp->qp_sec->mutex);
if (ret)
destroy_qp_security(qp->qp_sec);
return ret;
}
void ib_close_shared_qp_security(struct ib_qp_security *sec)
{
struct ib_qp *real_qp = sec->qp->real_qp;
mutex_lock(&real_qp->qp_sec->mutex);
list_del(&sec->shared_qp_list);
mutex_unlock(&real_qp->qp_sec->mutex);
destroy_qp_security(sec);
}
int ib_create_qp_security(struct ib_qp *qp, struct ib_device *dev)
{
int ret;
qp->qp_sec = kzalloc(sizeof(*qp->qp_sec), GFP_KERNEL);
if (!qp->qp_sec)
return -ENOMEM;
qp->qp_sec->qp = qp;
qp->qp_sec->dev = dev;
mutex_init(&qp->qp_sec->mutex);
INIT_LIST_HEAD(&qp->qp_sec->shared_qp_list);
atomic_set(&qp->qp_sec->error_list_count, 0);
init_completion(&qp->qp_sec->error_complete);
ret = security_ib_alloc_security(&qp->qp_sec->security);
if (ret)
kfree(qp->qp_sec);
return ret;
}
EXPORT_SYMBOL(ib_create_qp_security);
void ib_destroy_qp_security_begin(struct ib_qp_security *sec)
{
mutex_lock(&sec->mutex);
/* Remove the QP from the lists so it won't get added to
* a to_error_list during the destroy process.
*/
if (sec->ports_pkeys) {
port_pkey_list_remove(&sec->ports_pkeys->main);
port_pkey_list_remove(&sec->ports_pkeys->alt);
}
/* If the QP is already in one or more of those lists
* the destroying flag will ensure the to error flow
* doesn't operate on an undefined QP.
*/
sec->destroying = true;
/* Record the error list count to know how many completions
* to wait for.
*/
sec->error_comps_pending = atomic_read(&sec->error_list_count);
mutex_unlock(&sec->mutex);
}
void ib_destroy_qp_security_abort(struct ib_qp_security *sec)
{
int ret;
int i;
/* If a concurrent cache update is in progress this
* QP security could be marked for an error state
* transition. Wait for this to complete.
*/
for (i = 0; i < sec->error_comps_pending; i++)
wait_for_completion(&sec->error_complete);
mutex_lock(&sec->mutex);
sec->destroying = false;
	/* Restore the position in the lists and verify
	 * access is still allowed in case a cache update
	 * occurred while attempting to destroy.
	 *
	 * Because these settings were listed already and
	 * removed during ib_destroy_qp_security_begin we
	 * know the pkey_index_qp_list for the PKey already
	 * exists, so port_pkey_list_insert won't fail.
	 */
if (sec->ports_pkeys) {
port_pkey_list_insert(&sec->ports_pkeys->main);
port_pkey_list_insert(&sec->ports_pkeys->alt);
}
ret = check_qp_port_pkey_settings(sec->ports_pkeys, sec);
if (ret)
qp_to_error(sec);
mutex_unlock(&sec->mutex);
}
void ib_destroy_qp_security_end(struct ib_qp_security *sec)
{
int i;
/* If a concurrent cache update is occurring we must
* wait until this QP security structure is processed
* in the QP to error flow before destroying it because
* the to_error_list is in use.
*/
for (i = 0; i < sec->error_comps_pending; i++)
wait_for_completion(&sec->error_complete);
destroy_qp_security(sec);
}
void ib_security_cache_change(struct ib_device *device,
u8 port_num,
u64 subnet_prefix)
{
struct pkey_index_qp_list *pkey;
list_for_each_entry(pkey,
&device->port_pkey_list[port_num].pkey_list,
pkey_index_list) {
check_pkey_qps(pkey,
device,
port_num,
subnet_prefix);
}
}
void ib_security_destroy_port_pkey_list(struct ib_device *device)
{
struct pkey_index_qp_list *pkey, *tmp_pkey;
int i;
for (i = rdma_start_port(device); i <= rdma_end_port(device); i++) {
spin_lock(&device->port_pkey_list[i].list_lock);
list_for_each_entry_safe(pkey,
tmp_pkey,
&device->port_pkey_list[i].pkey_list,
pkey_index_list) {
list_del(&pkey->pkey_index_list);
kfree(pkey);
}
spin_unlock(&device->port_pkey_list[i].list_lock);
}
}
int ib_security_modify_qp(struct ib_qp *qp,
struct ib_qp_attr *qp_attr,
int qp_attr_mask,
struct ib_udata *udata)
{
int ret = 0;
struct ib_ports_pkeys *tmp_pps;
struct ib_ports_pkeys *new_pps;
bool special_qp = (qp->qp_type == IB_QPT_SMI ||
qp->qp_type == IB_QPT_GSI ||
qp->qp_type >= IB_QPT_RESERVED1);
bool pps_change = ((qp_attr_mask & (IB_QP_PKEY_INDEX | IB_QP_PORT)) ||
(qp_attr_mask & IB_QP_ALT_PATH));
if (pps_change && !special_qp) {
mutex_lock(&qp->qp_sec->mutex);
		new_pps = get_new_pps(qp,
				      qp_attr,
				      qp_attr_mask);
		if (!new_pps) {
			mutex_unlock(&qp->qp_sec->mutex);
			return -ENOMEM;
		}
/* Add this QP to the lists for the new port
* and pkey settings before checking for permission
* in case there is a concurrent cache update
* occurring. Walking the list for a cache change
* doesn't acquire the security mutex unless it's
* sending the QP to error.
*/
ret = port_pkey_list_insert(&new_pps->main);
if (!ret)
ret = port_pkey_list_insert(&new_pps->alt);
if (!ret)
ret = check_qp_port_pkey_settings(new_pps,
qp->qp_sec);
}
if (!ret)
ret = qp->device->modify_qp(qp->real_qp,
qp_attr,
qp_attr_mask,
udata);
if (pps_change && !special_qp) {
/* Clean up the lists and free the appropriate
* ports_pkeys structure.
*/
if (ret) {
tmp_pps = new_pps;
} else {
tmp_pps = qp->qp_sec->ports_pkeys;
qp->qp_sec->ports_pkeys = new_pps;
}
if (tmp_pps) {
port_pkey_list_remove(&tmp_pps->main);
port_pkey_list_remove(&tmp_pps->alt);
}
kfree(tmp_pps);
mutex_unlock(&qp->qp_sec->mutex);
}
return ret;
}
EXPORT_SYMBOL(ib_security_modify_qp);
int ib_security_pkey_access(struct ib_device *dev,
u8 port_num,
u16 pkey_index,
void *sec)
{
u64 subnet_prefix;
u16 pkey;
int ret;
ret = ib_get_cached_pkey(dev, port_num, pkey_index, &pkey);
if (ret)
return ret;
ret = ib_get_cached_subnet_prefix(dev, port_num, &subnet_prefix);
if (ret)
return ret;
return security_ib_pkey_access(sec, subnet_prefix, pkey);
}
EXPORT_SYMBOL(ib_security_pkey_access);
static int ib_mad_agent_security_change(struct notifier_block *nb,
unsigned long event,
void *data)
{
struct ib_mad_agent *ag = container_of(nb, struct ib_mad_agent, lsm_nb);
if (event != LSM_POLICY_CHANGE)
return NOTIFY_DONE;
ag->smp_allowed = !security_ib_endport_manage_subnet(ag->security,
ag->device->name,
ag->port_num);
return NOTIFY_OK;
}
int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
enum ib_qp_type qp_type)
{
int ret;
ret = security_ib_alloc_security(&agent->security);
if (ret)
return ret;
if (qp_type != IB_QPT_SMI)
return 0;
ret = security_ib_endport_manage_subnet(agent->security,
agent->device->name,
agent->port_num);
if (ret)
return ret;
agent->lsm_nb.notifier_call = ib_mad_agent_security_change;
ret = register_lsm_notifier(&agent->lsm_nb);
if (ret)
return ret;
agent->smp_allowed = true;
agent->lsm_nb_reg = true;
return 0;
}
void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
{
security_ib_free_security(agent->security);
if (agent->lsm_nb_reg)
unregister_lsm_notifier(&agent->lsm_nb);
}
int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
{
int ret;
if (map->agent.qp->qp_type == IB_QPT_SMI && !map->agent.smp_allowed)
return -EACCES;
ret = ib_security_pkey_access(map->agent.device,
map->agent.port_num,
pkey_index,
map->agent.security);
if (ret)
return ret;
return 0;
}
#endif /* CONFIG_SECURITY_INFINIBAND */
@@ -1508,6 +1508,10 @@ static int create_qp(struct ib_uverbs_file *file,
}
if (cmd->qp_type != IB_QPT_XRC_TGT) {
ret = ib_create_qp_security(qp, device);
if (ret)
goto err_cb;
qp->real_qp = qp;
qp->device = device;
qp->pd = pd;
@@ -2002,14 +2006,17 @@ static int modify_qp(struct ib_uverbs_file *file,
if (ret)
goto release_qp;
}
-		ret = qp->device->modify_qp(qp, attr,
+		ret = ib_security_modify_qp(qp,
+					    attr,
					    modify_qp_mask(qp->qp_type,
							   cmd->base.attr_mask),
					    udata);
} else {
-		ret = ib_modify_qp(qp, attr,
-				   modify_qp_mask(qp->qp_type,
-						  cmd->base.attr_mask));
+		ret = ib_security_modify_qp(qp,
+					    attr,
+					    modify_qp_mask(qp->qp_type,
+							   cmd->base.attr_mask),
+					    NULL);
}
release_qp:
......
@@ -44,6 +44,7 @@
#include <linux/in.h>
#include <linux/in6.h>
#include <net/addrconf.h>
#include <linux/security.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
@@ -713,11 +714,19 @@ static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
{
struct ib_qp *qp;
unsigned long flags;
int err;
qp = kzalloc(sizeof *qp, GFP_KERNEL);
if (!qp)
return ERR_PTR(-ENOMEM);
+	qp->real_qp = real_qp;
+	err = ib_open_shared_qp_security(qp, real_qp->device);
+	if (err) {
+		kfree(qp);
+		return ERR_PTR(err);
+	}
qp->real_qp = real_qp;
atomic_inc(&real_qp->usecnt);
qp->device = real_qp->device;
@@ -804,6 +813,12 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
if (IS_ERR(qp))
return qp;
ret = ib_create_qp_security(qp, device);
if (ret) {
ib_destroy_qp(qp);
return ERR_PTR(ret);
}
qp->device = device;
qp->real_qp = qp;
qp->uobject = NULL;
@@ -1266,7 +1281,7 @@ int ib_modify_qp(struct ib_qp *qp,
return ret;
}
-	return qp->device->modify_qp(qp->real_qp, qp_attr, qp_attr_mask, NULL);
+	return ib_security_modify_qp(qp->real_qp, qp_attr, qp_attr_mask, NULL);
}
EXPORT_SYMBOL(ib_modify_qp);
@@ -1295,6 +1310,7 @@ int ib_close_qp(struct ib_qp *qp)
spin_unlock_irqrestore(&real_qp->device->event_handler_lock, flags);
atomic_dec(&real_qp->usecnt);
ib_close_shared_qp_security(qp->qp_sec);
kfree(qp);
return 0;
@@ -1335,6 +1351,7 @@ int ib_destroy_qp(struct ib_qp *qp)
struct ib_cq *scq, *rcq;
struct ib_srq *srq;
struct ib_rwq_ind_table *ind_tbl;
struct ib_qp_security *sec;
int ret;
WARN_ON_ONCE(qp->mrs_used > 0);
@@ -1350,6 +1367,9 @@ int ib_destroy_qp(struct ib_qp *qp)
rcq = qp->recv_cq;
srq = qp->srq;
ind_tbl = qp->rwq_ind_tbl;
sec = qp->qp_sec;
if (sec)
ib_destroy_qp_security_begin(sec);
if (!qp->uobject)
rdma_rw_cleanup_mrs(qp);
@@ -1366,6 +1386,11 @@ int ib_destroy_qp(struct ib_qp *qp)
atomic_dec(&srq->usecnt);
if (ind_tbl)
atomic_dec(&ind_tbl->usecnt);
if (sec)
ib_destroy_qp_security_end(sec);
} else {
if (sec)
ib_destroy_qp_security_abort(sec);
}
return ret;
......
@@ -2545,10 +2545,25 @@ EXPORT_SYMBOL_GPL(nfs_set_sb_security);
int nfs_clone_sb_security(struct super_block *s, struct dentry *mntroot,
struct nfs_mount_info *mount_info)
{
int error;
unsigned long kflags = 0, kflags_out = 0;
/* clone any lsm security options from the parent to the new sb */
if (d_inode(mntroot)->i_op != NFS_SB(s)->nfs_client->rpc_ops->dir_inode_ops)
return -ESTALE;
-	return security_sb_clone_mnt_opts(mount_info->cloned->sb, s);
+	if (NFS_SB(s)->caps & NFS_CAP_SECURITY_LABEL)
+		kflags |= SECURITY_LSM_NATIVE_LABELS;
+
+	error = security_sb_clone_mnt_opts(mount_info->cloned->sb, s, kflags,
+					   &kflags_out);
+	if (error)
+		return error;
+
+	if (NFS_SB(s)->caps & NFS_CAP_SECURITY_LABEL &&
+	    !(kflags_out & SECURITY_LSM_NATIVE_LABELS))
+		NFS_SB(s)->caps &= ~NFS_CAP_SECURITY_LABEL;
+
+	return 0;
}
EXPORT_SYMBOL_GPL(nfs_clone_sb_security);
......
@@ -75,11 +75,17 @@ static inline void ima_add_kexec_buffer(struct kimage *image)
#endif
#ifdef CONFIG_IMA_APPRAISE
extern bool is_ima_appraise_enabled(void);
extern void ima_inode_post_setattr(struct dentry *dentry);
extern int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name,
const void *xattr_value, size_t xattr_value_len);
extern int ima_inode_removexattr(struct dentry *dentry, const char *xattr_name);
#else
static inline bool is_ima_appraise_enabled(void)
{
return false;
}
static inline void ima_inode_post_setattr(struct dentry *dentry)
{
return;
......
@@ -21,6 +21,7 @@
#include <linux/path.h>
#include <linux/key.h>
#include <linux/skbuff.h>
#include <rdma/ib_verbs.h>
struct lsm_network_audit {
int netif;
@@ -45,6 +46,16 @@ struct lsm_ioctlop_audit {
u16 cmd;
};
struct lsm_ibpkey_audit {
u64 subnet_prefix;
u16 pkey;
};
struct lsm_ibendport_audit {
char dev_name[IB_DEVICE_NAME_MAX];
u8 port;
};
/* Auxiliary data to use in generating the audit record. */
struct common_audit_data {
char type;
@@ -60,6 +71,8 @@ struct common_audit_data {
#define LSM_AUDIT_DATA_DENTRY 10
#define LSM_AUDIT_DATA_IOCTL_OP 11
#define LSM_AUDIT_DATA_FILE 12
#define LSM_AUDIT_DATA_IBPKEY 13
#define LSM_AUDIT_DATA_IBENDPORT 14
union {
struct path path;
struct dentry *dentry;
@@ -77,6 +90,8 @@ struct common_audit_data {
char *kmod_name;
struct lsm_ioctlop_audit *op;
struct file *file;
struct lsm_ibpkey_audit *ibpkey;
struct lsm_ibendport_audit *ibendport;
} u;
/* this union contains LSM specific data */
union {
......
@@ -8,6 +8,7 @@
* Copyright (C) 2001 Silicon Graphics, Inc. (Trust Technology Group)
* Copyright (C) 2015 Intel Corporation.
* Copyright (C) 2015 Casey Schaufler <casey@schaufler-ca.com>
* Copyright (C) 2016 Mellanox Technologies
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -912,6 +913,26 @@
* associated with the TUN device's security structure.
* @security pointer to the TUN device's security structure.
*
* Security hooks for Infiniband
*
* @ib_pkey_access:
* Check permission to access a pkey when modifying a QP.
* @subnet_prefix the subnet prefix of the port being used.
* @pkey the pkey to be accessed.
* @sec pointer to a security structure.
* @ib_endport_manage_subnet:
* Check permissions to send and receive SMPs on an end port.
* @dev_name the IB device name (e.g. mlx4_0).
* @port_num the port number.
* @sec pointer to a security structure.
* @ib_alloc_security:
* Allocate a security structure for Infiniband objects.
* @sec pointer to a security structure pointer.
* Returns 0 on success, non-zero on failure
* @ib_free_security:
* Deallocate an Infiniband security structure.
* @sec contains the security structure to be freed.
*
* Security hooks for XFRM operations.
*
* @xfrm_policy_alloc_security:
......@@ -1387,7 +1408,9 @@ union security_list_options {
unsigned long kern_flags,
unsigned long *set_kern_flags);
int (*sb_clone_mnt_opts)(const struct super_block *oldsb,
struct super_block *newsb);
struct super_block *newsb,
unsigned long kern_flags,
unsigned long *set_kern_flags);
int (*sb_parse_opts_str)(char *options, struct security_mnt_opts *opts);
int (*dentry_init_security)(struct dentry *dentry, int mode,
const struct qstr *name, void **ctx,
......@@ -1619,6 +1642,14 @@ union security_list_options {
int (*tun_dev_open)(void *security);
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_INFINIBAND
int (*ib_pkey_access)(void *sec, u64 subnet_prefix, u16 pkey);
int (*ib_endport_manage_subnet)(void *sec, const char *dev_name,
u8 port_num);
int (*ib_alloc_security)(void **sec);
void (*ib_free_security)(void *sec);
#endif /* CONFIG_SECURITY_INFINIBAND */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
int (*xfrm_policy_alloc_security)(struct xfrm_sec_ctx **ctxp,
struct xfrm_user_sec_ctx *sec_ctx,
......@@ -1850,6 +1881,12 @@ struct security_hook_heads {
struct list_head tun_dev_attach;
struct list_head tun_dev_open;
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_INFINIBAND
struct list_head ib_pkey_access;
struct list_head ib_endport_manage_subnet;
struct list_head ib_alloc_security;
struct list_head ib_free_security;
#endif /* CONFIG_SECURITY_INFINIBAND */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
struct list_head xfrm_policy_alloc_security;
struct list_head xfrm_policy_clone_security;
......
......@@ -6,6 +6,7 @@
* Copyright (C) 2001 Networks Associates Technology, Inc <ssmalley@nai.com>
* Copyright (C) 2001 James Morris <jmorris@intercode.com.au>
* Copyright (C) 2001 Silicon Graphics, Inc. (Trust Technology Group)
* Copyright (C) 2016 Mellanox Technologies
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
......@@ -68,6 +69,10 @@ struct audit_krule;
struct user_namespace;
struct timezone;
enum lsm_event {
LSM_POLICY_CHANGE,
};
/* These functions are in security/commoncap.c */
extern int cap_capable(const struct cred *cred, struct user_namespace *ns,
int cap, int audit);
......@@ -163,6 +168,10 @@ struct security_mnt_opts {
int num_mnt_opts;
};
int call_lsm_notifier(enum lsm_event event, void *data);
int register_lsm_notifier(struct notifier_block *nb);
int unregister_lsm_notifier(struct notifier_block *nb);
static inline void security_init_mnt_opts(struct security_mnt_opts *opts)
{
opts->mnt_opts = NULL;
......@@ -240,7 +249,9 @@ int security_sb_set_mnt_opts(struct super_block *sb,
unsigned long kern_flags,
unsigned long *set_kern_flags);
int security_sb_clone_mnt_opts(const struct super_block *oldsb,
struct super_block *newsb);
struct super_block *newsb,
unsigned long kern_flags,
unsigned long *set_kern_flags);
int security_sb_parse_opts_str(char *options, struct security_mnt_opts *opts);
int security_dentry_init_security(struct dentry *dentry, int mode,
const struct qstr *name, void **ctx,
......@@ -381,6 +392,21 @@ int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen);
struct security_mnt_opts {
};
static inline int call_lsm_notifier(enum lsm_event event, void *data)
{
return 0;
}
static inline int register_lsm_notifier(struct notifier_block *nb)
{
return 0;
}
static inline int unregister_lsm_notifier(struct notifier_block *nb)
{
return 0;
}
static inline void security_init_mnt_opts(struct security_mnt_opts *opts)
{
}
......@@ -581,7 +607,9 @@ static inline int security_sb_set_mnt_opts(struct super_block *sb,
}
static inline int security_sb_clone_mnt_opts(const struct super_block *oldsb,
struct super_block *newsb)
struct super_block *newsb,
unsigned long kern_flags,
unsigned long *set_kern_flags)
{
return 0;
}
......@@ -1406,6 +1434,32 @@ static inline int security_tun_dev_open(void *security)
}
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_INFINIBAND
int security_ib_pkey_access(void *sec, u64 subnet_prefix, u16 pkey);
int security_ib_endport_manage_subnet(void *sec, const char *name, u8 port_num);
int security_ib_alloc_security(void **sec);
void security_ib_free_security(void *sec);
#else /* CONFIG_SECURITY_INFINIBAND */
static inline int security_ib_pkey_access(void *sec, u64 subnet_prefix, u16 pkey)
{
return 0;
}
static inline int security_ib_endport_manage_subnet(void *sec, const char *dev_name, u8 port_num)
{
return 0;
}
static inline int security_ib_alloc_security(void **sec)
{
return 0;
}
static inline void security_ib_free_security(void *sec)
{
}
#endif /* CONFIG_SECURITY_INFINIBAND */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp,
......@@ -1651,6 +1705,10 @@ extern struct dentry *securityfs_create_file(const char *name, umode_t mode,
struct dentry *parent, void *data,
const struct file_operations *fops);
extern struct dentry *securityfs_create_dir(const char *name, struct dentry *parent);
struct dentry *securityfs_create_symlink(const char *name,
struct dentry *parent,
const char *target,
const struct inode_operations *iops);
extern void securityfs_remove(struct dentry *dentry);
#else /* CONFIG_SECURITYFS */
......@@ -1670,6 +1728,14 @@ static inline struct dentry *securityfs_create_file(const char *name,
return ERR_PTR(-ENODEV);
}
static inline struct dentry *securityfs_create_symlink(const char *name,
struct dentry *parent,
const char *target,
const struct inode_operations *iops)
{
return ERR_PTR(-ENODEV);
}
static inline void securityfs_remove(struct dentry *dentry)
{}
......
......@@ -575,6 +575,10 @@ struct ib_mad_agent {
u32 flags;
u8 port_num;
u8 rmpp_version;
void *security;
bool smp_allowed;
bool lsm_nb_reg;
struct notifier_block lsm_nb;
};
/**
......
......@@ -1614,6 +1614,45 @@ struct ib_rwq_ind_table_init_attr {
struct ib_wq **ind_tbl;
};
enum port_pkey_state {
IB_PORT_PKEY_NOT_VALID = 0,
IB_PORT_PKEY_VALID = 1,
IB_PORT_PKEY_LISTED = 2,
};
struct ib_qp_security;
struct ib_port_pkey {
enum port_pkey_state state;
u16 pkey_index;
u8 port_num;
struct list_head qp_list;
struct list_head to_error_list;
struct ib_qp_security *sec;
};
struct ib_ports_pkeys {
struct ib_port_pkey main;
struct ib_port_pkey alt;
};
struct ib_qp_security {
struct ib_qp *qp;
struct ib_device *dev;
/* Hold this mutex when changing port and pkey settings. */
struct mutex mutex;
struct ib_ports_pkeys *ports_pkeys;
/* A list of all open shared QP handles. Required to enforce security
* properly for all users of a shared QP.
*/
struct list_head shared_qp_list;
void *security;
bool destroying;
atomic_t error_list_count;
struct completion error_complete;
int error_comps_pending;
};
/*
* @max_write_sge: Maximum SGE elements per RDMA WRITE request.
* @max_read_sge: Maximum SGE elements per RDMA READ request.
......@@ -1643,6 +1682,7 @@ struct ib_qp {
u32 max_read_sge;
enum ib_qp_type qp_type;
struct ib_rwq_ind_table *rwq_ind_tbl;
struct ib_qp_security *qp_sec;
};
struct ib_mr {
......@@ -1891,6 +1931,7 @@ enum ib_mad_result {
};
struct ib_port_cache {
u64 subnet_prefix;
struct ib_pkey_cache *pkey;
struct ib_gid_table *gid;
u8 lmc;
......@@ -1940,6 +1981,12 @@ struct rdma_netdev {
union ib_gid *gid, u16 mlid);
};
struct ib_port_pkey_list {
/* Lock to hold while modifying the list. */
spinlock_t list_lock;
struct list_head pkey_list;
};
struct ib_device {
/* Do not access @dma_device directly from ULP nor from HW drivers. */
struct device *dma_device;
......@@ -1963,6 +2010,8 @@ struct ib_device {
int num_comp_vectors;
struct ib_port_pkey_list *port_pkey_list;
struct iw_cm_verbs *iwcm;
/**
......
......@@ -80,6 +80,8 @@
#define BTRFS_TEST_MAGIC 0x73727279
#define NSFS_MAGIC 0x6e736673
#define BPF_FS_MAGIC 0xcafe4a11
#define AAFS_MAGIC 0x5a3c69f0
/* Since UDF 2.01 is ISO 13346 based... */
#define UDF_SUPER_MAGIC 0x15013346
#define BALLOON_KVM_MAGIC 0x13661366
......
......@@ -46,4 +46,8 @@ struct vtpm_proxy_new_dev {
#define VTPM_PROXY_IOC_NEW_DEV _IOWR(0xa1, 0x00, struct vtpm_proxy_new_dev)
/* vendor specific commands to set locality */
#define TPM2_CC_SET_LOCALITY 0x20001000
#define TPM_ORD_SET_LOCALITY 0x20001000
#endif /* _UAPI_LINUX_VTPM_PROXY_H */
......@@ -13,7 +13,7 @@
* of Berkeley Packet Filters/Linux Socket Filters.
*/
#include <linux/atomic.h>
#include <linux/refcount.h>
#include <linux/audit.h>
#include <linux/compat.h>
#include <linux/coredump.h>
......@@ -56,7 +56,7 @@
* to a task_struct (other than @usage).
*/
struct seccomp_filter {
atomic_t usage;
refcount_t usage;
struct seccomp_filter *prev;
struct bpf_prog *prog;
};
......@@ -378,7 +378,7 @@ static struct seccomp_filter *seccomp_prepare_filter(struct sock_fprog *fprog)
return ERR_PTR(ret);
}
atomic_set(&sfilter->usage, 1);
refcount_set(&sfilter->usage, 1);
return sfilter;
}
......@@ -465,7 +465,7 @@ void get_seccomp_filter(struct task_struct *tsk)
if (!orig)
return;
/* Reference count is bounded by the number of total processes. */
atomic_inc(&orig->usage);
refcount_inc(&orig->usage);
}
static inline void seccomp_filter_free(struct seccomp_filter *filter)
......@@ -481,7 +481,7 @@ void put_seccomp_filter(struct task_struct *tsk)
{
struct seccomp_filter *orig = tsk->seccomp.filter;
/* Clean up single-reference branches iteratively. */
while (orig && atomic_dec_and_test(&orig->usage)) {
while (orig && refcount_dec_and_test(&orig->usage)) {
struct seccomp_filter *freeme = orig;
orig = orig->prev;
seccomp_filter_free(freeme);
......@@ -641,11 +641,12 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
return 0;
case SECCOMP_RET_KILL:
default: {
siginfo_t info;
default:
audit_seccomp(this_syscall, SIGSYS, action);
/* Dump core only if this is the last remaining thread. */
if (get_nr_threads(current) == 1) {
siginfo_t info;
/* Show the original registers in the dump. */
syscall_rollback(current, task_pt_regs(current));
/* Trigger a manual coredump since do_exit skips it. */
......@@ -654,7 +655,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
}
do_exit(SIGSYS);
}
}
unreachable();
......
......@@ -54,6 +54,15 @@ config SECURITY_NETWORK
implement socket and networking access controls.
If you are unsure how to answer this question, answer N.
config SECURITY_INFINIBAND
bool "Infiniband Security Hooks"
depends on SECURITY && INFINIBAND
help
This enables the Infiniband security hooks.
If enabled, a security module can use these hooks to
implement Infiniband access controls.
If you are unsure how to answer this question, answer N.
config SECURITY_NETWORK_XFRM
bool "XFRM (IPSec) Networking Security Hooks"
depends on XFRM && SECURITY_NETWORK
......@@ -139,7 +148,7 @@ config HARDENED_USERCOPY
copying memory to/from the kernel (via copy_to_user() and
copy_from_user() functions) by rejecting memory ranges that
are larger than the specified heap object, span multiple
separately allocates pages, are not on the process stack,
separately allocated pages, are not on the process stack,
or are part of the kernel text. This kills entire classes
of heap overflow exploits and similar kernel memory exposures.
......
......@@ -4,7 +4,7 @@ obj-$(CONFIG_SECURITY_APPARMOR) += apparmor.o
apparmor-y := apparmorfs.o audit.o capability.o context.o ipc.o lib.o match.o \
path.o domain.o policy.o policy_unpack.o procattr.o lsm.o \
resource.o secid.o file.o policy_ns.o
resource.o secid.o file.o policy_ns.o label.o
apparmor-$(CONFIG_SECURITY_APPARMOR_HASH) += crypto.o
clean-files := capability_names.h rlim_names.h
......@@ -20,7 +20,7 @@ cmd_make-caps = echo "static const char *const capability_names[] = {" > $@ ;\
sed $< >>$@ -r -n -e '/CAP_FS_MASK/d' \
-e 's/^\#define[ \t]+CAP_([A-Z0-9_]+)[ \t]+([0-9]+)/[\2] = "\L\1",/p';\
echo "};" >> $@ ;\
echo -n '\#define AA_FS_CAPS_MASK "' >> $@ ;\
printf '%s' '\#define AA_SFS_CAPS_MASK "' >> $@ ;\
sed $< -r -n -e '/CAP_FS_MASK/d' \
-e 's/^\#define[ \t]+CAP_([A-Z0-9_]+)[ \t]+([0-9]+)/\L\1/p' | \
tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
......@@ -46,7 +46,7 @@ cmd_make-caps = echo "static const char *const capability_names[] = {" > $@ ;\
# #define RLIMIT_FSIZE 1 /* Maximum filesize */
# #define RLIMIT_STACK 3 /* max stack size */
# to
# #define AA_FS_RLIMIT_MASK "fsize stack"
# #define AA_SFS_RLIMIT_MASK "fsize stack"
quiet_cmd_make-rlim = GEN $@
cmd_make-rlim = echo "static const char *const rlim_names[RLIM_NLIMITS] = {" \
> $@ ;\
......@@ -56,7 +56,7 @@ cmd_make-rlim = echo "static const char *const rlim_names[RLIM_NLIMITS] = {" \
echo "static const int rlim_map[RLIM_NLIMITS] = {" >> $@ ;\
sed -r -n "s/^\# ?define[ \t]+(RLIMIT_[A-Z0-9_]+).*/\1,/p" $< >> $@ ;\
echo "};" >> $@ ; \
echo -n '\#define AA_FS_RLIMIT_MASK "' >> $@ ;\
printf '%s' '\#define AA_SFS_RLIMIT_MASK "' >> $@ ;\
sed -r -n 's/^\# ?define[ \t]+RLIMIT_([A-Z0-9_]+).*/\L\1/p' $< | \
tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
......
This diff has been collapsed.
......@@ -77,14 +77,24 @@ static void audit_pre(struct audit_buffer *ab, void *ca)
audit_log_format(ab, " error=%d", aad(sa)->error);
}
if (aad(sa)->profile) {
struct aa_profile *profile = aad(sa)->profile;
if (profile->ns != root_ns) {
audit_log_format(ab, " namespace=");
audit_log_untrustedstring(ab, profile->ns->base.hname);
if (aad(sa)->label) {
struct aa_label *label = aad(sa)->label;
if (label_isprofile(label)) {
struct aa_profile *profile = labels_profile(label);
if (profile->ns != root_ns) {
audit_log_format(ab, " namespace=");
audit_log_untrustedstring(ab,
profile->ns->base.hname);
}
audit_log_format(ab, " profile=");
audit_log_untrustedstring(ab, profile->base.hname);
} else {
audit_log_format(ab, " label=");
aa_label_xaudit(ab, root_ns, label, FLAG_VIEW_SUBNS,
GFP_ATOMIC);
}
audit_log_format(ab, " profile=");
audit_log_untrustedstring(ab, profile->base.hname);
}
if (aad(sa)->name) {
......@@ -139,8 +149,7 @@ int aa_audit(int type, struct aa_profile *profile, struct common_audit_data *sa,
if (KILL_MODE(profile) && type == AUDIT_APPARMOR_DENIED)
type = AUDIT_APPARMOR_KILL;
if (!unconfined(profile))
aad(sa)->profile = profile;
aad(sa)->label = &profile->label;
aa_audit_msg(type, sa, cb);
......
......@@ -28,8 +28,8 @@
*/
#include "capability_names.h"
struct aa_fs_entry aa_fs_entry_caps[] = {
AA_FS_FILE_STRING("mask", AA_FS_CAPS_MASK),
struct aa_sfs_entry aa_sfs_entry_caps[] = {
AA_SFS_FILE_STRING("mask", AA_SFS_CAPS_MASK),
{ }
};
......@@ -48,15 +48,16 @@ static DEFINE_PER_CPU(struct audit_cache, audit_cache);
static void audit_cb(struct audit_buffer *ab, void *va)
{
struct common_audit_data *sa = va;
audit_log_format(ab, " capname=");
audit_log_untrustedstring(ab, capability_names[sa->u.cap]);
}
/**
* audit_caps - audit a capability
* @sa: audit data
* @profile: profile being tested for confinement (NOT NULL)
* @cap: capability tested
* @audit: whether an audit record should be generated
* @error: error code returned by test
*
* Do auditing of capability and handle, audit/complain/kill modes switching
......@@ -64,16 +65,13 @@ static void audit_cb(struct audit_buffer *ab, void *va)
*
* Returns: 0 or sa->error on success, error code on failure
*/
static int audit_caps(struct aa_profile *profile, int cap, int audit,
int error)
static int audit_caps(struct common_audit_data *sa, struct aa_profile *profile,
int cap, int error)
{
struct audit_cache *ent;
int type = AUDIT_APPARMOR_AUTO;
DEFINE_AUDIT_DATA(sa, LSM_AUDIT_DATA_CAP, OP_CAPABLE);
sa.u.cap = cap;
aad(&sa)->error = error;
if (audit == SECURITY_CAP_NOAUDIT)
aad(&sa)->info = "optional: no audit";
aad(sa)->error = error;
if (likely(!error)) {
/* test if auditing is being forced */
......@@ -105,24 +103,44 @@ static int audit_caps(struct aa_profile *profile, int cap, int audit,
}
put_cpu_var(audit_cache);
return aa_audit(type, profile, &sa, audit_cb);
return aa_audit(type, profile, sa, audit_cb);
}
/**
* profile_capable - test if profile allows use of capability @cap
* @profile: profile being enforced (NOT NULL, NOT unconfined)
* @cap: capability to test if allowed
* @audit: whether an audit record should be generated
* @sa: audit data (MAY BE NULL indicating no auditing)
*
* Returns: 0 if allowed else -EPERM
*/
static int profile_capable(struct aa_profile *profile, int cap)
static int profile_capable(struct aa_profile *profile, int cap, int audit,
struct common_audit_data *sa)
{
return cap_raised(profile->caps.allow, cap) ? 0 : -EPERM;
int error;
if (cap_raised(profile->caps.allow, cap) &&
!cap_raised(profile->caps.denied, cap))
error = 0;
else
error = -EPERM;
if (audit == SECURITY_CAP_NOAUDIT) {
if (!COMPLAIN_MODE(profile))
return error;
/* audit the cap request in complain mode but note that it
* should be optional.
*/
aad(sa)->info = "optional: no audit";
}
return audit_caps(sa, profile, cap, error);
}
/**
* aa_capable - test permission to use capability
* @profile: profile being tested against (NOT NULL)
* @label: label being tested for capability (NOT NULL)
* @cap: capability to be tested
* @audit: whether an audit record should be generated
*
......@@ -130,14 +148,15 @@ static int profile_capable(struct aa_profile *profile, int cap)
*
* Returns: 0 on success, or else an error code.
*/
int aa_capable(struct aa_profile *profile, int cap, int audit)
int aa_capable(struct aa_label *label, int cap, int audit)
{
int error = profile_capable(profile, cap);
struct aa_profile *profile;
int error = 0;
DEFINE_AUDIT_DATA(sa, LSM_AUDIT_DATA_CAP, OP_CAPABLE);
if (audit == SECURITY_CAP_NOAUDIT) {
if (!COMPLAIN_MODE(profile))
return error;
}
sa.u.cap = cap;
error = fn_for_each_confined(label, profile,
profile_capable(profile, cap, audit, &sa));
return audit_caps(profile, cap, audit, error);
return error;
}
......@@ -14,9 +14,9 @@
*
*
* AppArmor sets confinement on every task, via the aa_task_ctx and
* the aa_task_ctx.profile, both of which are required and are not allowed
* the aa_task_ctx.label, both of which are required and are not allowed
* to be NULL. The aa_task_ctx is not reference counted and is unique
to each cred (which is reference counted). The profile pointed to by
* to each cred (which is reference counted). The label pointed to by
* the task_ctx is reference counted.
*
* TODO
......@@ -47,9 +47,9 @@ struct aa_task_ctx *aa_alloc_task_context(gfp_t flags)
void aa_free_task_context(struct aa_task_ctx *ctx)
{
if (ctx) {
aa_put_profile(ctx->profile);
aa_put_profile(ctx->previous);
aa_put_profile(ctx->onexec);
aa_put_label(ctx->label);
aa_put_label(ctx->previous);
aa_put_label(ctx->onexec);
kzfree(ctx);
}
......@@ -63,41 +63,41 @@ void aa_free_task_context(struct aa_task_ctx *ctx)
void aa_dup_task_context(struct aa_task_ctx *new, const struct aa_task_ctx *old)
{
*new = *old;
aa_get_profile(new->profile);
aa_get_profile(new->previous);
aa_get_profile(new->onexec);
aa_get_label(new->label);
aa_get_label(new->previous);
aa_get_label(new->onexec);
}
/**
* aa_get_task_profile - Get another task's profile
* aa_get_task_label - Get another task's label
* @task: task to query (NOT NULL)
*
* Returns: counted reference to @task's profile
* Returns: counted reference to @task's label
*/
struct aa_profile *aa_get_task_profile(struct task_struct *task)
struct aa_label *aa_get_task_label(struct task_struct *task)
{
struct aa_profile *p;
struct aa_label *p;
rcu_read_lock();
p = aa_get_profile(__aa_task_profile(task));
p = aa_get_newest_label(__aa_task_raw_label(task));
rcu_read_unlock();
return p;
}
/**
* aa_replace_current_profile - replace the current tasks profiles
* @profile: new profile (NOT NULL)
* aa_replace_current_label - replace the current tasks label
* @label: new label (NOT NULL)
*
* Returns: 0 or error on failure
*/
int aa_replace_current_profile(struct aa_profile *profile)
int aa_replace_current_label(struct aa_label *label)
{
struct aa_task_ctx *ctx = current_ctx();
struct cred *new;
AA_BUG(!profile);
AA_BUG(!label);
if (ctx->profile == profile)
if (ctx->label == label)
return 0;
if (current_cred() != current_real_cred())
......@@ -108,8 +108,8 @@ int aa_replace_current_profile(struct aa_profile *profile)
return -ENOMEM;
ctx = cred_ctx(new);
if (unconfined(profile) || (ctx->profile->ns != profile->ns))
/* if switching to unconfined or a different profile namespace
if (unconfined(label) || (labels_ns(ctx->label) != labels_ns(label)))
/* if switching to unconfined or a different label namespace
* clear out context state
*/
aa_clear_task_ctx_trans(ctx);
......@@ -120,9 +120,9 @@ int aa_replace_current_profile(struct aa_profile *profile)
* keeping @profile valid, so make sure to get its reference before
* dropping the reference on ctx->profile
*/
aa_get_profile(profile);
aa_put_profile(ctx->profile);
ctx->profile = profile;
aa_get_label(label);
aa_put_label(ctx->label);
ctx->label = label;
commit_creds(new);
return 0;
......@@ -130,11 +130,11 @@ int aa_replace_current_profile(struct aa_profile *profile)
/**
* aa_set_current_onexec - set the tasks change_profile to happen onexec
* @profile: system profile to set at exec (MAYBE NULL to clear value)
*
* @label: system label to set at exec (MAYBE NULL to clear value)
* @stack: whether stacking should be done
* Returns: 0 or error on failure
*/
int aa_set_current_onexec(struct aa_profile *profile)
int aa_set_current_onexec(struct aa_label *label, bool stack)
{
struct aa_task_ctx *ctx;
struct cred *new = prepare_creds();
......@@ -142,9 +142,10 @@ int aa_set_current_onexec(struct aa_profile *profile)
return -ENOMEM;
ctx = cred_ctx(new);
aa_get_profile(profile);
aa_put_profile(ctx->onexec);
ctx->onexec = profile;
aa_get_label(label);
aa_clear_task_ctx_trans(ctx);
ctx->onexec = label;
ctx->token = stack;
commit_creds(new);
return 0;
......@@ -152,7 +153,7 @@ int aa_set_current_onexec(struct aa_profile *profile)
/**
* aa_set_current_hat - set the current tasks hat
* @profile: profile to set as the current hat (NOT NULL)
* @label: label to set as the current hat (NOT NULL)
* @token: token value that must be specified to change from the hat
*
* Do switch of tasks hat. If the task is currently in a hat
......@@ -160,29 +161,29 @@ int aa_set_current_onexec(struct aa_profile *profile)
*
* Returns: 0 or error on failure
*/
int aa_set_current_hat(struct aa_profile *profile, u64 token)
int aa_set_current_hat(struct aa_label *label, u64 token)
{
struct aa_task_ctx *ctx;
struct cred *new = prepare_creds();
if (!new)
return -ENOMEM;
AA_BUG(!profile);
AA_BUG(!label);
ctx = cred_ctx(new);
if (!ctx->previous) {
/* transfer refcount */
ctx->previous = ctx->profile;
ctx->previous = ctx->label;
ctx->token = token;
} else if (ctx->token == token) {
aa_put_profile(ctx->profile);
aa_put_label(ctx->label);
} else {
/* previous_profile && ctx->token != token */
abort_creds(new);
return -EACCES;
}
ctx->profile = aa_get_newest_profile(profile);
ctx->label = aa_get_newest_label(label);
/* clear exec on switching context */
aa_put_profile(ctx->onexec);
aa_put_label(ctx->onexec);
ctx->onexec = NULL;
commit_creds(new);
......@@ -190,15 +191,15 @@ int aa_set_current_hat(struct aa_profile *profile, u64 token)
}
/**
* aa_restore_previous_profile - exit from hat context restoring the profile
* aa_restore_previous_label - exit from hat context restoring previous label
* @token: the token that must be matched to exit hat context
*
* Attempt to return out of a hat to the previous profile. The token
* Attempt to return out of a hat to the previous label. The token
* must match the stored token value.
*
* Returns: 0 or error on failure
*/
int aa_restore_previous_profile(u64 token)
int aa_restore_previous_label(u64 token)
{
struct aa_task_ctx *ctx;
struct cred *new = prepare_creds();
......@@ -210,15 +211,15 @@ int aa_restore_previous_profile(u64 token)
abort_creds(new);
return -EACCES;
}
/* ignore restores when there is no saved profile */
/* ignore restores when there is no saved label */
if (!ctx->previous) {
abort_creds(new);
return 0;
}
aa_put_profile(ctx->profile);
ctx->profile = aa_get_newest_profile(ctx->previous);
AA_BUG(!ctx->profile);
aa_put_label(ctx->label);
ctx->label = aa_get_newest_label(ctx->previous);
AA_BUG(!ctx->label);
/* clear exec && prev information when restoring to previous context */
aa_clear_task_ctx_trans(ctx);
......
(2 file diffs collapsed)
......@@ -4,7 +4,7 @@
* This file contains AppArmor basic global
*
* Copyright (C) 1998-2008 Novell/SUSE
* Copyright 2009-2010 Canonical Ltd.
* Copyright 2009-2017 Canonical Ltd.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
......@@ -27,8 +27,10 @@
#define AA_CLASS_NET 4
#define AA_CLASS_RLIMITS 5
#define AA_CLASS_DOMAIN 6
#define AA_CLASS_PTRACE 9
#define AA_CLASS_LABEL 16
#define AA_CLASS_LAST AA_CLASS_DOMAIN
#define AA_CLASS_LAST AA_CLASS_LABEL
/* Control parameters settable through module/boot flags */
extern enum audit_mode aa_g_audit;
......
......@@ -17,49 +17,49 @@
extern struct path aa_null;
enum aa_fs_type {
AA_FS_TYPE_BOOLEAN,
AA_FS_TYPE_STRING,
AA_FS_TYPE_U64,
AA_FS_TYPE_FOPS,
AA_FS_TYPE_DIR,
enum aa_sfs_type {
AA_SFS_TYPE_BOOLEAN,
AA_SFS_TYPE_STRING,
AA_SFS_TYPE_U64,
AA_SFS_TYPE_FOPS,
AA_SFS_TYPE_DIR,
};
struct aa_fs_entry;
struct aa_sfs_entry;
struct aa_fs_entry {
struct aa_sfs_entry {
const char *name;
struct dentry *dentry;
umode_t mode;
enum aa_fs_type v_type;
enum aa_sfs_type v_type;
union {
bool boolean;
char *string;
unsigned long u64;
struct aa_fs_entry *files;
struct aa_sfs_entry *files;
} v;
const struct file_operations *file_ops;
};
extern const struct file_operations aa_fs_seq_file_ops;
extern const struct file_operations aa_sfs_seq_file_ops;
#define AA_FS_FILE_BOOLEAN(_name, _value) \
#define AA_SFS_FILE_BOOLEAN(_name, _value) \
{ .name = (_name), .mode = 0444, \
.v_type = AA_FS_TYPE_BOOLEAN, .v.boolean = (_value), \
.file_ops = &aa_fs_seq_file_ops }
#define AA_FS_FILE_STRING(_name, _value) \
.v_type = AA_SFS_TYPE_BOOLEAN, .v.boolean = (_value), \
.file_ops = &aa_sfs_seq_file_ops }
#define AA_SFS_FILE_STRING(_name, _value) \
{ .name = (_name), .mode = 0444, \
.v_type = AA_FS_TYPE_STRING, .v.string = (_value), \
.file_ops = &aa_fs_seq_file_ops }
#define AA_FS_FILE_U64(_name, _value) \
.v_type = AA_SFS_TYPE_STRING, .v.string = (_value), \
.file_ops = &aa_sfs_seq_file_ops }
#define AA_SFS_FILE_U64(_name, _value) \
{ .name = (_name), .mode = 0444, \
.v_type = AA_FS_TYPE_U64, .v.u64 = (_value), \
.file_ops = &aa_fs_seq_file_ops }
#define AA_FS_FILE_FOPS(_name, _mode, _fops) \
{ .name = (_name), .v_type = AA_FS_TYPE_FOPS, \
.v_type = AA_SFS_TYPE_U64, .v.u64 = (_value), \
.file_ops = &aa_sfs_seq_file_ops }
#define AA_SFS_FILE_FOPS(_name, _mode, _fops) \
{ .name = (_name), .v_type = AA_SFS_TYPE_FOPS, \
.mode = (_mode), .file_ops = (_fops) }
#define AA_FS_DIR(_name, _value) \
{ .name = (_name), .v_type = AA_FS_TYPE_DIR, .v.files = (_value) }
#define AA_SFS_DIR(_name, _value) \
{ .name = (_name), .v_type = AA_SFS_TYPE_DIR, .v.files = (_value) }
extern void __init aa_destroy_aafs(void);
......@@ -74,6 +74,7 @@ enum aafs_ns_type {
AAFS_NS_LOAD,
AAFS_NS_REPLACE,
AAFS_NS_REMOVE,
AAFS_NS_REVISION,
AAFS_NS_COUNT,
AAFS_NS_MAX_COUNT,
AAFS_NS_SIZE,
......@@ -102,16 +103,22 @@ enum aafs_prof_type {
#define ns_subload(X) ((X)->dents[AAFS_NS_LOAD])
#define ns_subreplace(X) ((X)->dents[AAFS_NS_REPLACE])
#define ns_subremove(X) ((X)->dents[AAFS_NS_REMOVE])
#define ns_subrevision(X) ((X)->dents[AAFS_NS_REVISION])
#define prof_dir(X) ((X)->dents[AAFS_PROF_DIR])
#define prof_child_dir(X) ((X)->dents[AAFS_PROF_PROFS])
void __aa_fs_profile_rmdir(struct aa_profile *profile);
void __aa_fs_profile_migrate_dents(struct aa_profile *old,
void __aa_bump_ns_revision(struct aa_ns *ns);
void __aafs_profile_rmdir(struct aa_profile *profile);
void __aafs_profile_migrate_dents(struct aa_profile *old,
struct aa_profile *new);
int __aa_fs_profile_mkdir(struct aa_profile *profile, struct dentry *parent);
void __aa_fs_ns_rmdir(struct aa_ns *ns);
int __aa_fs_ns_mkdir(struct aa_ns *ns, struct dentry *parent,
const char *name);
int __aafs_profile_mkdir(struct aa_profile *profile, struct dentry *parent);
void __aafs_ns_rmdir(struct aa_ns *ns);
int __aafs_ns_mkdir(struct aa_ns *ns, struct dentry *parent, const char *name,
struct dentry *dent);
struct aa_loaddata;
void __aa_fs_remove_rawdata(struct aa_loaddata *rawdata);
int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata);
#endif /* __AA_APPARMORFS_H */
......@@ -22,8 +22,7 @@
#include <linux/slab.h>
#include "file.h"
struct aa_profile;
#include "label.h"
extern const char *const audit_mode_names[];
#define AUDIT_MAX_INDEX 5
......@@ -65,10 +64,12 @@ enum audit_type {
#define OP_GETATTR "getattr"
#define OP_OPEN "open"
#define OP_FRECEIVE "file_receive"
#define OP_FPERM "file_perm"
#define OP_FLOCK "file_lock"
#define OP_FMMAP "file_mmap"
#define OP_FMPROT "file_mprotect"
#define OP_INHERIT "file_inherit"
#define OP_CREATE "create"
#define OP_POST_CREATE "post_create"
......@@ -91,6 +92,8 @@ enum audit_type {
#define OP_CHANGE_HAT "change_hat"
#define OP_CHANGE_PROFILE "change_profile"
#define OP_CHANGE_ONEXEC "change_onexec"
#define OP_STACK "stack"
#define OP_STACK_ONEXEC "stack_onexec"
#define OP_SETPROCATTR "setprocattr"
#define OP_SETRLIMIT "setrlimit"
......@@ -102,19 +105,19 @@ enum audit_type {
struct apparmor_audit_data {
int error;
const char *op;
int type;
void *profile;
const char *op;
struct aa_label *label;
const char *name;
const char *info;
u32 request;
u32 denied;
union {
/* these entries require a custom callback fn */
struct {
struct aa_profile *peer;
struct aa_label *peer;
struct {
const char *target;
u32 request;
u32 denied;
kuid_t ouid;
} fs;
};
......
......@@ -19,11 +19,12 @@
#include "apparmorfs.h"
struct aa_profile;
struct aa_label;
/* aa_caps - confinement data for capabilities
* @allow: capabilities mask
* @audit: caps that are to be audited
* @denied: caps that are explicitly denied
* @quiet: caps that should not be audited
* @kill: caps that when requested will result in the task being killed
* @extended: caps that are subject to finer-grained mediation
......@@ -31,14 +32,15 @@ struct aa_profile;
struct aa_caps {
kernel_cap_t allow;
kernel_cap_t audit;
kernel_cap_t denied;
kernel_cap_t quiet;
kernel_cap_t kill;
kernel_cap_t extended;
};
extern struct aa_fs_entry aa_fs_entry_caps[];
extern struct aa_sfs_entry aa_sfs_entry_caps[];
int aa_capable(struct aa_profile *profile, int cap, int audit);
int aa_capable(struct aa_label *label, int cap, int audit);
static inline void aa_free_cap_rules(struct aa_caps *caps)
{
......
This diff has been collapsed.
......@@ -23,14 +23,17 @@ struct aa_domain {
char **table;
};
#define AA_CHANGE_NOFLAGS 0
#define AA_CHANGE_TEST 1
#define AA_CHANGE_CHILD 2
#define AA_CHANGE_ONEXEC 4
#define AA_CHANGE_STACK 8
int apparmor_bprm_set_creds(struct linux_binprm *bprm);
int apparmor_bprm_secureexec(struct linux_binprm *bprm);
void apparmor_bprm_committing_creds(struct linux_binprm *bprm);
void apparmor_bprm_committed_creds(struct linux_binprm *bprm);
void aa_free_domain_entries(struct aa_domain *domain);
int aa_change_hat(const char *hats[], int count, u64 token, bool permtest);
int aa_change_profile(const char *fqname, bool onexec, bool permtest,
bool stack);
int aa_change_hat(const char *hats[], int count, u64 token, int flags);
int aa_change_profile(const char *fqname, int flags);
#endif /* __AA_DOMAIN_H */
This diff has been collapsed.
......@@ -4,7 +4,7 @@
* This file contains AppArmor ipc mediation function definitions.
*
* Copyright (C) 1998-2008 Novell/SUSE
- * Copyright 2009-2010 Canonical Ltd.
+ * Copyright 2009-2017 Canonical Ltd.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
......@@ -19,10 +19,16 @@
struct aa_profile;
-int aa_may_ptrace(struct aa_profile *tracer, struct aa_profile *tracee,
-		  unsigned int mode);
#define AA_PTRACE_TRACE MAY_WRITE
#define AA_PTRACE_READ MAY_READ
+#define AA_MAY_BE_TRACED AA_MAY_APPEND
+#define AA_MAY_BE_READ AA_MAY_CREATE
+#define PTRACE_PERM_SHIFT 2
-int aa_ptrace(struct task_struct *tracer, struct task_struct *tracee,
-	      unsigned int mode);
+#define AA_PTRACE_PERM_MASK (AA_PTRACE_READ | AA_PTRACE_TRACE | \
+			     AA_MAY_BE_READ | AA_MAY_BE_TRACED)
+int aa_may_ptrace(struct aa_label *tracer, struct aa_label *tracee,
+		  u32 request);
#endif /* __AA_IPC_H */
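The new defines encode both sides of the ptrace cross-check in one permission word: shifting a tracer-side request left by `PTRACE_PERM_SHIFT` yields the matching tracee-side "may be ..." bit. A sketch, assuming the kernel's usual values (`MAY_WRITE = 0x2`, `MAY_READ = 0x4`, `AA_MAY_APPEND = 0x8`, `AA_MAY_CREATE = 0x10`); `tracee_perm()` is a hypothetical helper for illustration:

```c
#include <stdint.h>

/* Numeric values assumed from the kernel's MAY_* / AA_MAY_* bits. */
#define AA_PTRACE_TRACE   0x02	/* MAY_WRITE */
#define AA_PTRACE_READ    0x04	/* MAY_READ */
#define AA_MAY_BE_TRACED  0x08	/* AA_MAY_APPEND */
#define AA_MAY_BE_READ    0x10	/* AA_MAY_CREATE */
#define PTRACE_PERM_SHIFT 2

#define AA_PTRACE_PERM_MASK (AA_PTRACE_READ | AA_PTRACE_TRACE | \
			     AA_MAY_BE_READ | AA_MAY_BE_TRACED)

/* The tracer-side request shifted by PTRACE_PERM_SHIFT gives the
 * tracee-side "may be ..." permission, so one rule table can encode
 * both directions of the ptrace cross check. */
static uint32_t tracee_perm(uint32_t tracer_request)
{
	return tracer_request << PTRACE_PERM_SHIFT;
}
```

With this layout, checking "may A trace B" and "may B be traced by A" reuses the same permission machinery on a single `u32 request`, which is what lets the new label-based `aa_may_ptrace()` drop the old `unsigned int mode` interface.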
This diff is collapsed.
......@@ -23,11 +23,12 @@ enum path_flags {
	PATH_CHROOT_NSCONNECT = 0x10,	/* connect paths that are at ns root */
	PATH_DELEGATE_DELETED = 0x08000, /* delegate deleted files */
	PATH_MEDIATE_DELETED = 0x10000,	/* mediate deleted paths */
};
-int aa_path_name(const struct path *path, int flags, char **buffer,
-		 const char **name, const char **info);
+int aa_path_name(const struct path *path, int flags, char *buffer,
+		 const char **name, const char **info,
+		 const char *disconnected);
+#define MAX_PATH_BUFFERS 2
......
This diff is collapsed.
......@@ -182,7 +182,7 @@ security_initcall(integrity_iintcache_init);
*
*/
int integrity_kernel_read(struct file *file, loff_t offset,
-			  char *addr, unsigned long count)
+			  void *addr, unsigned long count)
{
mm_segment_t old_fs;
char __user *buf = (char __user *)addr;
......
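The `char *` to `void *` change in `integrity_kernel_read()` saves callers a cast when passing arbitrary buffer types (structs, digests, raw pages). A hedged userspace analog (hypothetical `read_at()` using stdio in place of kernel file I/O, purely to illustrate the signature):

```c
#include <stdio.h>

/* Userspace analog of the signature change: taking `void *addr`
 * lets callers pass any buffer type without casts.  Returns the
 * number of bytes read, or -1 on seek failure. */
static long read_at(FILE *file, long offset, void *addr, unsigned long count)
{
	if (fseek(file, offset, SEEK_SET) != 0)
		return -1;
	return (long)fread(addr, 1, count, file);
}
```

The body is unchanged by such a retyping; only the interface becomes friendlier, which is why the kernel hunk touches just the parameter declaration.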
This diff is collapsed.