Canonicalize __atomic/sync_fetch_or/xor/and for constant mask.
Canonicalize the order of & and nop_convert for
__atomic_fetch_or_*, __atomic_fetch_xor_*,
__atomic_xor_fetch_*, __sync_fetch_and_or_*,
__sync_fetch_and_xor_*, __sync_xor_and_fetch_*,
__atomic_fetch_and_* and __sync_fetch_and_and_* when the mask is constant,
i.e.
+/* Canonicalize
+ _1 = __atomic_fetch_or_4 (&v, 1, 0);
+ _2 = (int) _1;
+ _5 = _2 & 1;
+
+to
+
+ _1 = __atomic_fetch_or_4 (&v, 1, 0);
+ _2 = _1 & 1;
+ _5 = (int) _2;  */
+/* Convert
+ _1 = __atomic_fetch_and_4 (a_6(D), 4294959103, 0);
+ _2 = (int) _1;
+ _3 = _2 & 8192;
+to
+ _1 = __atomic_fetch_and_4 (a_4(D), 4294959103, 0);
+ _7 = _1 & 8192;
+ _6 = (int) _7;
+ So it can be handled by optimize_atomic_bit_test_and. */
I'm trying to rewrite the match part in match.pd, and I find the
canonicalization is fine when the mask is constant, but not when it is
a variable, since it will be simplified back by:
/* In GIMPLE, getting rid of 2 conversions for one new results
in smaller IL. */
(simplify
(convert (bitop:cs@2 (nop_convert:s @0) @1))
(if (GIMPLE
&& TREE_CODE (@1) != INTEGER_CST
&& tree_nop_conversion_p (type, TREE_TYPE (@2))
&& types_match (type, @0))
(bitop @0 (convert @1)))))
The canonicalization for a variable mask is like
convert
_1 = ~mask_7;
_2 = (unsigned int) _1;
_3 = __atomic_fetch_and_4 (ptr_6, _2, 0);
_4 = (int) _3;
_5 = _4 & mask_7;
to
_1 = ~mask_7;
_2 = (unsigned int) _1;
_3 = __atomic_fetch_and_4 (ptr_6, _2, 0);
_4 = (unsigned int) mask_7;
_6 = _3 & _4;
_5 = (int) _6;
and it gets simplified back by the pattern above.
I've also tried another way of simplification, like
convert
_1 = ~mask_7;
_2 = (unsigned int) _1;
_3 = __atomic_fetch_and_4 (ptr_6, _2, 0);
_4 = (int) _3;
_5 = _4 & mask_7;
to
_1 = (unsigned int) mask_7;
_2 = ~_1;
_3 = __atomic_fetch_and_4 (ptr_6, _2, 0);
_6 = _3 & _1;
_5 = (int) _6;
but it is prevented by the check below, since __atomic_fetch_and_4 is
not CONST and we would need to regenerate the call with the updated
parameter:
/* We can't and should not emit calls to non-const functions. */
if (!(flags_from_decl_or_type (decl) & ECF_CONST))
return NULL;
Edited by hongtao