R/ocPredprob.R (2 changes: 1 addition & 1 deletion)

@@ -58,7 +58,7 @@ h_get_decision_one_predprob <- function(nnr, truep, p0, parE = c(1, 1), nnE, nnF
thetaT = tT,
parE = parE
)$result
-    decision <- ifelse(interim_qU > phiU, FALSE, decision)
+    decision <- ifelse(interim_qU > phiU, TRUE, decision)
all_looks <- orig_nnr[index_look]
}
if (size_look %in% nnF) {
tests/testthat/test-ocPredprob.R (28 changes: 28 additions & 0 deletions)

@@ -21,6 +21,34 @@ test_that("h_get_decision_one_predprob gives correct result and list", {
expect_list(result)
})

+test_that("Go decision for all interims when criteria met for decision 1", {
+  set.seed(40)
+  expect_warning(
+    res <- ocPredprob(
+      decision1 = TRUE,
+      nnE = c(20, 30, 40),
+      nnF = c(20, 30, 40),
+      truep = 0.90,
+      phiU = 0.75,
+      phiL = 0.20,
+      tT = 0.70,
+      p0 = 0.20,
+      parE = c(0.2, 0.8),
+      sim = 100
+    ),
+    "Advise to use sim >= 50000 to achieve convergence"
+  )
+  random_number <- sample(x = length(res$Decision), size = 1)
+  expect_true(all(res$Decision == TRUE))
+  expect_true(all(res$SampleSize == 20))
+  expect_true(res$params$decision1 == TRUE)
+  expect_equal(res$oc$ExpectedN, res$SampleSize[random_number])
+  expect_identical(res$oc$PrEfficacy, 1)
+  expect_identical(res$oc$PrEarlyEff, 1)
+  expect_identical(res$oc$PrEarlyFut, 0)
+  expect_identical(res$oc$PrFutility, 0)
+})

# h_get_decision_two_predprob ----
test_that("h_get_decision_two_predprob gives correct result and list", {
set.seed(1989)
tests/testthat/test-predprobDist.R (75 changes: 46 additions & 29 deletions)

@@ -15,7 +15,7 @@ test_that("h_predprobdist_single_arm gives correct results", {
)
expect_equal(result$result, 0.7081907, tolerance = 1e-4)
expect_equal(sum(result$table$density), 1, tolerance = 1e-4)
-  expect_true(all(result$posterior) <= 1)
+  expect_true(all(result$table$posterior <= 1))
})

test_that("h_predprobdist_single_arm gives higher predictive probability when thetaT is lower", {
Expand Down Expand Up @@ -99,7 +99,7 @@ test_that("h_predprobdist_single_arm gives correct list", {

# h_predprobdist ----
test_that("h_predprobdist gives correct list", {
-  result <- h_predprobdist(
+  warnings <- capture_warnings(result <- h_predprobdist(
audreyyeoCH (Collaborator, Author) commented:

Hi @danielinteractive, in h_getBetamixPost I added a warning a few issues ago for input weights that don't add up to 1. Any test that relies on this helper therefore needs to handle the repeated warnings, and I found this solution.

This solution should be weighed against the risk of a user supplying the wrong weights. If we believe that risk matters, we can assert the multiple warnings in the test, to ensure it gives the correct number of the same warning.

This solution may also be seen as overkill: if we remove the "Weights have been corrected" warning and someone decides to add it back later, it will again cause a chain of warnings in all the calling functions that use h_getBetamixPost as a helper.

Thought I'd check in on your views on the risk vs. benefit before I add it to the other tests. Thanks!
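[Editor's note: the pattern referred to here is testthat's capture_warnings(), which collects every warning an expression raises instead of tripping on the first one. A minimal sketch, with a hypothetical noisy_helper() standing in for h_getBetamixPost:]

```r
library(testthat)

# Hypothetical stand-in for h_getBetamixPost: warns whenever the
# supplied weights do not sum to 1, then normalizes them.
noisy_helper <- function(weights) {
  if (!isTRUE(all.equal(sum(weights), 1))) {
    warning("Weights have been corrected")
    weights <- weights / sum(weights)
  }
  weights
}

test_that("repeated warnings are captured rather than failing the test", {
  # capture_warnings() returns a character vector with one element
  # per warning raised inside the expression.
  warnings <- capture_warnings(
    res <- lapply(1:5, function(i) noisy_helper(c(2, 1)))
  )
  expect_length(warnings, 5)
  expect_true(all(grepl("Weights have been corrected", warnings)))
})
```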

danielinteractive (Collaborator) commented:

Thanks @audreyyeoCH, good point. I think for the tests it might be OK, but I am more concerned about the user experience: is this helper function called inside simulation loops somewhere, and might it trigger as many warnings as there are simulation iterations?

If yes, I think we need to do something about it.

However, for the topic of this PR this seems tangential at best, so it would be better to handle it in a separate issue/PR if possible. That is, can we not leave this particular test as it was before in this PR?

audreyyeoCH (Collaborator, Author) commented on Jan 5, 2026:

Thanks for looking into it @danielinteractive.

> Is this helper function called inside simulation loops somewhere, and might it trigger as many warnings as there are simulation iterations?

Edit: the number of warnings is not the number of simulations. It is the number of arms, plus the number of posteriors calculated times the number of arms. In addition, the number of warnings R prints is capped at 50, with this message:

    There were 50 or more warnings (use warnings() to see the first 50)

The number of posteriors in this scenario is:

- active arm: 17 patients left to recruit, therefore 18 possible results (0...17 responders)
- control arm: 10 patients left to recruit, therefore 11 possible results (0...10 responders)

This is an array of 18 x 11 elements, a total of 198 possible combinations of results from the two arms. We get two warnings per posterior because there are 2 arms, making 396 warnings, plus one warning per arm for the posterior calculated on the observed data, which adds 2 more.

In total that is 396 + 2 = 398 identical warnings.
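[Editor's note: to make the arithmetic concrete, here is the same count in plain R, with the numbers taken from the test configuration discussed above:]

```r
# One warning per arm for each posterior evaluated on a possible future
# outcome, plus one per arm for the posterior on the observed data.
n_arms <- 2
active_outcomes <- 17 + 1   # 0..17 responders among 17 remaining patients
control_outcomes <- 10 + 1  # 0..10 responders among 10 remaining patients
combinations <- active_outcomes * control_outcomes # 198
combinations * n_arms + n_arms                     # 396 + 2 = 398
```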

I do not believe it affects the user experience, because of the one-line cap message above. So I feel we can go either way: carry on ensuring all tests capture the warnings, or completely remove the warning from h_getBetamixPost, so that the user carries their own risk of mis-specifying their weights, since ... $params will show what weights were used. What do you think?

> However, for the topic of this PR this seems tangential at best, so it would be better to handle it in a separate issue/PR if possible. That is, can we not leave this particular test as it was before in this PR?

I agree it seems tangential, but it is causing the checks not to pass. Is there another way around this that I am not seeing?

danielinteractive (Collaborator) commented:

OK, I ran the checks on the main branch and they are failing because of the warnings (and a few other things):
https://github.com/Genentech/phase1b/actions/runs/20737971973/job/59539040271

I would recommend first making a separate issue/PR to fix this, getting the main branch into a clean state, and then continuing with this issue/PR.

As an idea for the fix: so many warnings are just too many. Either we move the warning up to the user-level function so that it is given only once per user call, or we just don't warn.
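[Editor's note: a minimal sketch of the first option, assuming hypothetical function names rather than the package's actual API — the helper normalizes silently, and the single user-facing entry point owns the warning:]

```r
# Hypothetical helper: normalizes silently, assuming the user-facing
# caller has already warned once.
h_normalize_weights <- function(weights) {
  weights / sum(weights)
}

# Hypothetical user-facing entry point: warns at most once per call,
# no matter how often helpers run inside the simulation loop.
user_facing_fun <- function(weights, n_sims = 1000) {
  if (!isTRUE(all.equal(sum(weights), 1))) {
    warning("Weights have been corrected to sum to 1.")
    weights <- h_normalize_weights(weights)
  }
  # The loop below can now call helpers freely without re-triggering
  # the warning on every iteration.
  replicate(n_sims, sum(weights)) # placeholder for the real work
}
```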

NmaxControl = 20,
Nmax = 40,
n = 23,
@@ -113,7 +113,11 @@ test_that("h_predprobdist gives correct list", {
delta = 0.1,
relativeDelta = FALSE,
thetaT = 0.5
-  )
+  ))
+  number_of_arms <- 2
+  number_of_warnings <- sum(length(result$posterior) * number_of_arms, number_of_arms)
+  expect_true(all(grepl("Weights have been corrected", warnings)))
+  expect_length(warnings, number_of_warnings)
expect_equal(result$result, 0.9322923, tolerance = 1e-4)
expect_identical(result$table, data.frame(counts = 0:17, cumul_counts = as.numeric(16:33)))

Expand Down Expand Up @@ -228,34 +232,47 @@ test_that("predprobDist gives the correct results in a two-arm study", {
})

test_that("predprobDist gives higher predictive probability when thetaT is lower in a single-arm trial", {
-  is_lower <- predprobDist(
-    x = 16,
-    n = 23,
-    xS = 5,
-    nS = 10,
-    Nmax = 40,
-    NmaxControl = 20,
-    delta = 0.1,
-    thetaT = 0.9,
-    parE = rbind(c(1, 1), c(50, 10)),
-    weights = c(2, 1),
-    parS = rbind(c(1, 1), c(20, 40)),
-    weightsS = c(2, 1)
-  )
+  warnings_list <- capture_warnings(
+    is_lower <- predprobDist(
+      x = 16,
+      n = 23,
+      xS = 5,
+      nS = 10,
+      Nmax = 40,
+      NmaxControl = 20,
+      delta = 0.1,
+      thetaT = 0.9,
+      parE = rbind(c(1, 1), c(50, 10)),
+      weights = c(2, 1),
+      parS = rbind(c(1, 1), c(20, 40)),
+      weightsS = c(2, 1)
+    )
+  )
+  expect_true(all(grepl("Weights have been corrected", warnings_list)))
-  is_higher <- predprobDist(
-    x = 16,
-    n = 23,
-    xS = 5,
-    nS = 10,
-    Nmax = 40,
-    NmaxControl = 20,
-    delta = 0.1,
-    thetaT = 0.5,
-    parE = rbind(c(1, 1), c(50, 10)),
-    weights = c(2, 1),
-    parS = rbind(c(1, 1), c(20, 40)),
-    weightsS = c(2, 1)
-  )

+  expect_warning(
+    is_higher <- predprobDist(
+      x = 16,
+      n = 23,
+      xS = 5,
+      nS = 10,
+      Nmax = 40,
+      NmaxControl = 20,
+      delta = 0.1,
+      thetaT = 0.5,
+      parE = rbind(c(1, 1), c(50, 10)),
+      weights = c(2, 1),
+      parS = rbind(c(1, 1), c(20, 40)),
+      weightsS = c(2, 1)
+    ),
+    "Weights have been corrected"
+  )
+  # 398 = (17 + 1) * (10 + 1) posterior combinations x 2 arms + 2, per the
+  # warning count worked out in the review thread above.
+  expect_equal(
+    length(warnings_list),
+    398,
+    label = "Check total number of warnings"
+  )

expect_true(is_higher$result > is_lower$result)
})
