bpf: possibly avoid extra masking for narrower load in verifier
authorYonghong Song <yhs@fb.com>
Thu, 22 Jun 2017 22:07:39 +0000 (15:07 -0700)
committerDavid S. Miller <davem@davemloft.net>
Fri, 23 Jun 2017 18:04:11 +0000 (14:04 -0400)
commit239946314e57711d7da546b67964d0b387a3ee42
tree958d35fbbbc439b561832c75de22f5fdfa825f7c
parent72de46556f8a291b2c72ea1fa22275ffef85e4f9
bpf: possibly avoid extra masking for narrower load in verifier

Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
context fields") permits narrower loads for certain ctx fields.
The commit, however, still generates a masking operation even
when the prog-specific ctx conversion already produces a result
of the narrower size.

For example, for __sk_buff->protocol, the ctx conversion
loads the data into a register with a 2-byte load, so a
narrower 2-byte load should not generate masking.
For __sk_buff->vlan_present, the conversion function
sets the result to either 0 or 1, essentially a byte.
A narrower 2-byte or 1-byte load should not generate masking either.

To avoid unnecessary masking, the prog-specific *_is_valid_access
callbacks now pass converted_op_size back to the verifier, which
indicates the valid data width after the anticipated conversion.
Based on this information, the verifier is able to avoid
unnecessary masking.
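The decision described above can be sketched as a small predicate (a
simplified illustration, not the kernel's actual code; the helper name
needs_mask is hypothetical):

```c
#include <stdbool.h>

/* Hypothetical helper illustrating the verifier's decision: an extra
 * AND mask after a ctx load is only needed when the ctx conversion
 * produces more bytes than the program asked for.
 *
 * size              - width (in bytes) of the load in the BPF program
 * converted_op_size - width actually produced by the ctx conversion
 *
 * For __sk_buff->protocol the conversion is a 2-byte load, so a 2-byte
 * program load needs no mask; for __sk_buff->vlan_present the converted
 * result is essentially a byte, so neither a 1-byte nor a 2-byte load
 * needs a mask.
 */
static bool needs_mask(int size, int converted_op_size)
{
	return size < converted_op_size;
}
```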

Since more information now flows back from the prog-specific
*_is_valid_access checks, the returned values are packed into
one data structure for clarity.
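Such a structure might look roughly as follows (field and type names
are an assumption based on the description above; see the actual
definition in include/linux/bpf.h in this tree, and note the stand-in
enum here is heavily trimmed):

```c
/* Minimal stand-in for illustration; the kernel's enum bpf_reg_type
 * has many more members. */
enum bpf_reg_type { NOT_INIT, SCALAR_VALUE, PTR_TO_CTX };

/* Sketch of the packed data a *_is_valid_access callback could fill
 * in for the verifier; converted_op_size is what lets the verifier
 * skip a redundant mask after a narrower ctx load. */
struct bpf_insn_access_aux {
	enum bpf_reg_type reg_type;	/* register type after the load */
	int ctx_field_size;		/* declared width of the ctx field */
	int converted_op_size;		/* width produced by ctx conversion */
};
```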

Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
include/linux/bpf.h
include/linux/bpf_verifier.h
kernel/bpf/verifier.c
kernel/trace/bpf_trace.c
net/core/filter.c