in utils decode_ind_ab(), the calculation is
data_a = data_q/opt.A
data_b = data_q - data_a*opt.A
data_ab = torch.cat((data_a, data_b), dim=1)
however I believe, according to how the encoding was done, we should instead have something like
data_a = (data_q - data_b)/opt.A
I'm imagining this would have to be solved using linear programming or something.
I was just wondering if this is something you're aware of and whether I am missing something?
My issue is that when I use decode_ind_ab currently, all my b values come through as -1, because with
data_a = data_q/opt.A (eq1)
data_b = data_q - data_a*opt.A (eq2)
we can sub eq1 into eq2 to show that
data_b = data_q - data_q = 0
which then gets scaled and shifted to -1 before being returned.
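For reference, if the encoding really is data_q = data_a * opt.A + data_b with 0 <= data_b < opt.A (that's my reading of it, not confirmed from the repo), then no linear programming is needed: using floor (integer) division instead of true division recovers both channels exactly, since data_b is the remainder modulo opt.A. A minimal pure-Python sketch, where A = 10 and the function names are just illustrative stand-ins:

```python
A = 10  # stand-in for opt.A, the number of bins per axis

def encode_ind_ab(data_a, data_b):
    # flatten the (a, b) bin pair into a single index,
    # assuming 0 <= data_b < A
    return data_a * A + data_b

def decode_ind_ab(data_q):
    # floor division recovers data_a; true division (/) would make
    # data_b = data_q - data_q = 0, as described above
    data_a = data_q // A
    data_b = data_q - data_a * A
    return data_a, data_b

# round-trip check
q = encode_ind_ab(3, 7)
print(decode_ind_ab(q))  # (3, 7)
```

With tensors the same idea would apply, e.g. truncating/flooring the quotient before computing the remainder, so the behaviour may hinge on whether data_q is an integer or float tensor when the division runs.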