Commit 91ab8b6
committed
Drop the (broken) ability to disable broadcast when FC'ing a chan
`ChannelManager` has two public methods to force-close a (single) channel - `force_close_broadcasting_latest_txn` and `force_close_without_broadcasting_txn`. The second exists to allow a user who has restored a stale backup to (sort of) recover their funds by starting, force-closing channels without broadcasting their latest state, and only then reconnecting to peers (which would otherwise cause a panic due to the stale state).

To my knowledge, no one has ever implemented this (fairly half-assed) recovery flow, and, more importantly, it's rather substantially broken: `ChannelMonitor` has never tracked any concept of whether a channel is stale, and if we connect blocks towards an HTLC time-out it will immediately broadcast the stale state anyway.

Finally, we really shouldn't encourage users to try to "recover" from a stale state by running an otherwise-operational node on the other end. Rather, we should formalize such a recovery flow by having a `ChainMonitor` variant that takes a list of `ChannelMonitor`s and claims available funds, dropping the `ChannelManager` entirely.

While work continues towards that goal, having long-broken functionality in `ChannelManager` is also not acceptable, so here we simply remove the "without broadcasting" force-closure version.

Fixes #2875
1 parent 43d66c5 commit 91ab8b6

File tree

5 files changed: +85 -136 lines changed

lightning/src/events/mod.rs

Lines changed: 8 additions & 13 deletions

@@ -327,15 +327,10 @@ pub enum ClosureReason {
     /// Whether or not the latest transaction was broadcasted when the channel was force
     /// closed.
     ///
-    /// Channels closed using [`ChannelManager::force_close_broadcasting_latest_txn`] will have
-    /// this field set to true, whereas channels closed using [`ChannelManager::force_close_without_broadcasting_txn`]
-    /// or force-closed prior to being funded will have this field set to false.
+    /// TODO: Update docs on when this will happen!
     ///
     /// This will be `None` for objects generated or written by LDK 0.0.123 and
    /// earlier.
-    ///
-    /// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn.
-    /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn.
     broadcasted_latest_txn: Option<bool>,
     /// The error message provided to [`ChannelManager::force_close_broadcasting_latest_txn`] or
     /// [`ChannelManager::force_close_all_channels_broadcasting_latest_txn`].

@@ -1484,7 +1479,7 @@ pub enum Event {
     ///
     /// To accept the request (and in the case of a dual-funded channel, not contribute funds),
     /// call [`ChannelManager::accept_inbound_channel`].
-    /// To reject the request, call [`ChannelManager::force_close_without_broadcasting_txn`].
+    /// To reject the request, call [`ChannelManager::force_close_broadcasting_latest_txn`].
     /// Note that a ['ChannelClosed`] event will _not_ be triggered if the channel is rejected.
     ///
     /// The event is only triggered when a new open channel request is received and the

@@ -1495,27 +1490,27 @@ pub enum Event {
     /// returning `Err(ReplayEvent ())`) and won't be persisted across restarts.
     ///
     /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-    /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+    /// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
     /// [`UserConfig::manually_accept_inbound_channels`]: crate::util::config::UserConfig::manually_accept_inbound_channels
     OpenChannelRequest {
         /// The temporary channel ID of the channel requested to be opened.
         ///
         /// When responding to the request, the `temporary_channel_id` should be passed
         /// back to the ChannelManager through [`ChannelManager::accept_inbound_channel`] to accept,
-        /// or through [`ChannelManager::force_close_without_broadcasting_txn`] to reject.
+        /// or through [`ChannelManager::force_close_broadcasting_latest_txn`] to reject.
         ///
         /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-        /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+        /// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
        temporary_channel_id: ChannelId,
         /// The node_id of the counterparty requesting to open the channel.
         ///
         /// When responding to the request, the `counterparty_node_id` should be passed
         /// back to the `ChannelManager` through [`ChannelManager::accept_inbound_channel`] to
-        /// accept the request, or through [`ChannelManager::force_close_without_broadcasting_txn`] to reject the
-        /// request.
+        /// accept the request, or through [`ChannelManager::force_close_broadcasting_latest_txn`]
+        /// to reject the request.
         ///
         /// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-        /// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+        /// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
        counterparty_node_id: PublicKey,
         /// The channel value of the requested channel.
         funding_satoshis: u64,

lightning/src/ln/channel.rs

Lines changed: 6 additions & 12 deletions

@@ -8047,11 +8047,6 @@ where
 
     /// May panic if some calls other than message-handling calls (which will all Err immediately)
     /// have been called between remove_uncommitted_htlcs_and_mark_paused and this call.
-    ///
-    /// Some links printed in log lines are included here to check them during build (when run with
-    /// `cargo doc --document-private-items`):
-    /// [`super::channelmanager::ChannelManager::force_close_without_broadcasting_txn`] and
-    /// [`super::channelmanager::ChannelManager::force_close_all_channels_without_broadcasting_txn`].
     #[rustfmt::skip]
     pub fn channel_reestablish<L: Deref, NS: Deref>(
         &mut self, msg: &msgs::ChannelReestablish, logger: &L, node_signer: &NS,

@@ -8089,18 +8084,17 @@ where
         if msg.next_remote_commitment_number > our_commitment_transaction {
             macro_rules! log_and_panic {
                 ($err_msg: expr) => {
-                    log_error!(logger, $err_msg, &self.context.channel_id, log_pubkey!(self.context.counterparty_node_id));
-                    panic!($err_msg, &self.context.channel_id, log_pubkey!(self.context.counterparty_node_id));
+                    log_error!(logger, $err_msg);
+                    panic!($err_msg);
                 }
             }
             log_and_panic!("We have fallen behind - we have received proof that if we broadcast our counterparty is going to claim all our funds.\n\
                 This implies you have restarted with lost ChannelMonitor and ChannelManager state, the first of which is a violation of the LDK chain::Watch requirements.\n\
                 More specifically, this means you have a bug in your implementation that can cause loss of funds, or you are running with an old backup, which is unsafe.\n\
-                If you have restored from an old backup and wish to force-close channels and return to operation, you should start up, call\n\
-                ChannelManager::force_close_without_broadcasting_txn on channel {} with counterparty {} or\n\
-                ChannelManager::force_close_all_channels_without_broadcasting_txn, then reconnect to peer(s).\n\
-                Note that due to a long-standing bug in lnd you may have to reach out to peers running lnd-based nodes to ask them to manually force-close channels\n\
-                See https://github.com/lightningdevkit/rust-lightning/issues/1565 for more info.");
+                If you have restored from an old backup and wish to claim any available funds, you should restart with\n\
+                an empty ChannelManager and no ChannelMonitors, reconnect to peer(s), ensure they've force-closed all of your\n\
+                previous channels and that the closure transaction(s) have confirmed on-chain,\n\
+                then restart with an empty ChannelManager and the latest ChannelMonitors that you do have.");
         }
     }
 
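In the hunk above, `log_and_panic!` loses its extra format arguments because the rewritten message no longer interpolates a channel id or counterparty key. A minimal, self-contained sketch of the simplified pattern (the helper name `check_not_stale` and the `eprintln!` stand-in for LDK's `log_error!` are illustrative, not real LDK API):

```rust
// Minimal stand-in for the simplified log_and_panic! macro: log the message,
// then panic with the same fixed string. No format args are interpolated.
macro_rules! log_and_panic {
    ($err_msg: expr) => {
        eprintln!("ERROR: {}", $err_msg); // stands in for log_error!(logger, ...)
        panic!("{}", $err_msg); // the real macro writes panic!($err_msg)
    };
}

// Hypothetical helper mirroring the stale-state check in channel_reestablish:
// if the peer proves it has seen a newer commitment than ours, we must halt.
fn check_not_stale(next_remote_commitment_number: u64, our_commitment_transaction: u64) {
    if next_remote_commitment_number > our_commitment_transaction {
        log_and_panic!("We have fallen behind - we have received proof that if we broadcast our counterparty is going to claim all our funds.");
    }
}
```

Because the message is now a fixed literal, the macro no longer needs `&self.context.channel_id` or `log_pubkey!(...)` arguments, which is what the two-line change in the macro body reflects.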

lightning/src/ln/channelmanager.rs

Lines changed: 66 additions & 107 deletions

@@ -940,6 +940,16 @@ impl MsgHandleErrInternal {
         Self { err, closes_channel: false, shutdown_finish: None }
     }
 
+    fn dont_send_error_message(&mut self) {
+        match &mut self.err.action {
+            msgs::ErrorAction::DisconnectPeer { msg } => *msg = None,
+            msgs::ErrorAction::SendErrorMessage { msg: _ } => {
+                self.err.action = msgs::ErrorAction::IgnoreError;
+            },
+            _ => {},
+        }
+    }
+
     fn closes_channel(&self) -> bool {
         self.closes_channel
     }

@@ -1990,7 +2000,7 @@ where
     /// match event {
     ///     Event::OpenChannelRequest { temporary_channel_id, counterparty_node_id, .. } => {
     ///         if !is_trusted(counterparty_node_id) {
-    ///             match channel_manager.force_close_without_broadcasting_txn(
+    ///             match channel_manager.force_close_broadcasting_latest_txn(
     ///                 &temporary_channel_id, &counterparty_node_id, error_message.to_string()
     ///             ) {
     ///                 Ok(()) => println!("Rejecting channel {}", temporary_channel_id),

@@ -4350,85 +4360,66 @@ where
     /// `peer_msg` should be set when we receive a message from a peer, but not set when the
     /// user closes, which will be re-exposed as the `ChannelClosed` reason.
     #[rustfmt::skip]
-    fn force_close_channel_with_peer(&self, channel_id: &ChannelId, peer_node_id: &PublicKey, peer_msg: Option<&String>, broadcast: bool)
+    fn force_close_channel_with_peer(&self, channel_id: &ChannelId, peer_node_id: &PublicKey, reason: ClosureReason)
     -> Result<(), APIError> {
         let per_peer_state = self.per_peer_state.read().unwrap();
         let peer_state_mutex = per_peer_state.get(peer_node_id)
             .ok_or_else(|| APIError::ChannelUnavailable { err: format!("Can't find a peer matching the passed counterparty node_id {}", peer_node_id) })?;
-        let update_opt = {
-            let mut peer_state = peer_state_mutex.lock().unwrap();
-            let closure_reason = if let Some(peer_msg) = peer_msg {
-                ClosureReason::CounterpartyForceClosed { peer_msg: UntrustedString(peer_msg.to_string()) }
-            } else {
-                ClosureReason::HolderForceClosed {
-                    broadcasted_latest_txn: Some(broadcast),
-                    message: "Channel force-closed".to_owned(), // TODO
-                }
-            };
-            let logger = WithContext::from(&self.logger, Some(*peer_node_id), Some(*channel_id), None);
-            if let hash_map::Entry::Occupied(mut chan_entry) = peer_state.channel_by_id.entry(channel_id.clone()) {
-                log_error!(logger, "Force-closing channel {}", channel_id);
-                let (mut shutdown_res, update_opt) = match chan_entry.get_mut().as_funded_mut() {
-                    Some(chan) => {
-                        (
-                            chan.context.force_shutdown(&chan.funding, broadcast, closure_reason),
-                            self.get_channel_update_for_broadcast(&chan).ok(),
-                        )
-                    },
-                    None => {
-                        // Unfunded channel has no update
-                        (chan_entry.get_mut().force_shutdown(false, closure_reason), None)
-                    },
-                };
-                remove_channel_entry!(self, peer_state, chan_entry, shutdown_res);
-                mem::drop(peer_state);
-                mem::drop(per_peer_state);
-                self.finish_close_channel(shutdown_res);
-                update_opt
-            } else if peer_state.inbound_channel_request_by_id.remove(channel_id).is_some() {
-                log_error!(logger, "Force-closing channel {}", &channel_id);
-                // N.B. that we don't send any channel close event here: we
-                // don't have a user_channel_id, and we never sent any opening
-                // events anyway.
-                None
-            } else {
-                return Err(APIError::ChannelUnavailable{ err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, peer_node_id) });
-            }
+        let mut peer_state_lock = peer_state_mutex.lock().unwrap();
+        let peer_state = &mut *peer_state_lock;
+        let logger = WithContext::from(&self.logger, Some(*peer_node_id), Some(*channel_id), None);
+
+        let is_from_counterparty = matches!(reason, ClosureReason::CounterpartyForceClosed { .. });
+        let message = match &reason {
+            ClosureReason::HolderForceClosed { message, .. } => message.clone(),
+            _ => reason.to_string(),
         };
-        if let Some(update) = update_opt {
-            // If we have some Channel Update to broadcast, we cache it and broadcast it later.
-            let mut pending_broadcast_messages = self.pending_broadcast_messages.lock().unwrap();
-            pending_broadcast_messages.push(MessageSendEvent::BroadcastChannelUpdate {
-                msg: update
-            });
+
+        if let Some(mut chan) = peer_state.channel_by_id.remove(channel_id) {
+            log_error!(logger, "Force-closing channel {}", channel_id);
+            let err = ChannelError::Close((message, reason));
+            let (_, mut e) = convert_channel_err!(self, peer_state, err, chan, channel_id);
+            mem::drop(peer_state_lock);
+            mem::drop(per_peer_state);
+            if is_from_counterparty {
+                // If the peer is the one who asked us to force-close, don't reply with a fresh
+                // error message.
+                e.dont_send_error_message();
+            }
+            let _ = handle_error!(self, Err::<(), _>(e), *peer_node_id);
+            Ok(())
+        } else if peer_state.inbound_channel_request_by_id.remove(channel_id).is_some() {
+            log_error!(logger, "Force-closing inbound channel request {}", &channel_id);
+            if !is_from_counterparty {
+                peer_state.pending_msg_events.push(
+                    MessageSendEvent::HandleError {
+                        node_id: *peer_node_id,
+                        action: msgs::ErrorAction::SendErrorMessage {
+                            msg: msgs::ErrorMessage { channel_id: *channel_id, data: message }
+                        },
+                    }
+                );
+            }
+            // N.B. that we don't send any channel close event here: we
+            // don't have a user_channel_id, and we never sent any opening
+            // events anyway.
+            Ok(())
+        } else {
+            Err(APIError::ChannelUnavailable{ err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, peer_node_id) })
         }
-        Ok(())
     }
 
     #[rustfmt::skip]
-    fn force_close_sending_error(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, broadcast: bool, error_message: String)
+    fn force_close_sending_error(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String)
     -> Result<(), APIError> {
         let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
         log_debug!(self.logger,
             "Force-closing channel, The error message sent to the peer : {}", error_message);
-        match self.force_close_channel_with_peer(channel_id, &counterparty_node_id, None, broadcast) {
-            Ok(()) => {
-                let per_peer_state = self.per_peer_state.read().unwrap();
-                if let Some(peer_state_mutex) = per_peer_state.get(counterparty_node_id) {
-                    let mut peer_state = peer_state_mutex.lock().unwrap();
-                    peer_state.pending_msg_events.push(
-                        MessageSendEvent::HandleError {
-                            node_id: *counterparty_node_id,
-                            action: msgs::ErrorAction::SendErrorMessage {
-                                msg: msgs::ErrorMessage { channel_id: *channel_id, data: error_message }
-                            },
-                        }
-                    );
-                }
-                Ok(())
-            },
-            Err(e) => Err(e)
-        }
+        let reason = ClosureReason::HolderForceClosed {
+            broadcasted_latest_txn: Some(true),
+            message: error_message,
+        };
+        self.force_close_channel_with_peer(channel_id, &counterparty_node_id, reason)
     }
 
     /// Force closes a channel, immediately broadcasting the latest local transaction(s),

@@ -4442,23 +4433,7 @@ where
     pub fn force_close_broadcasting_latest_txn(
         &self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String,
     ) -> Result<(), APIError> {
-        self.force_close_sending_error(channel_id, counterparty_node_id, true, error_message)
-    }
-
-    /// Force closes a channel, rejecting new HTLCs on the given channel but skips broadcasting
-    /// the latest local transaction(s).
-    ///
-    /// The provided `error_message` is sent to connected peers for closing channels and should
-    /// be a human-readable description of what went wrong.
-    ///
-    /// Fails if `channel_id` is unknown to the manager, or if the
-    /// `counterparty_node_id` isn't the counterparty of the corresponding channel.
-    /// You can always broadcast the latest local transaction(s) via
-    /// [`ChannelMonitor::broadcast_latest_holder_commitment_txn`].
-    pub fn force_close_without_broadcasting_txn(
-        &self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String,
-    ) -> Result<(), APIError> {
-        self.force_close_sending_error(channel_id, counterparty_node_id, false, error_message)
+        self.force_close_sending_error(channel_id, counterparty_node_id, error_message)
     }
 
     /// Force close all channels, immediately broadcasting the latest local commitment transaction

@@ -4476,21 +4451,6 @@ where
         }
     }
 
-    /// Force close all channels rejecting new HTLCs on each but without broadcasting the latest
-    /// local transaction(s).
-    ///
-    /// The provided `error_message` is sent to connected peers for closing channels and
-    /// should be a human-readable description of what went wrong.
-    pub fn force_close_all_channels_without_broadcasting_txn(&self, error_message: String) {
-        for chan in self.list_channels() {
-            let _ = self.force_close_without_broadcasting_txn(
-                &chan.channel_id,
-                &chan.counterparty.node_id,
-                error_message.clone(),
-            );
-        }
-    }
-
     /// Initiate a splice, to change the channel capacity of an existing funded channel.
     /// After completion of splicing, the funding transaction will be replaced by a new one, spending the old funding transaction,
     /// with optional extra inputs (splice-in) and/or extra outputs (splice-out or change).

@@ -12993,6 +12953,9 @@ where
 
         let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
 
+        let peer_msg = UntrustedString(msg.data.clone());
+        let reason = ClosureReason::CounterpartyForceClosed { peer_msg };
+
         if msg.channel_id.is_zero() {
             let channel_ids: Vec<ChannelId> = {
                 let per_peer_state = self.per_peer_state.read().unwrap();

@@ -13007,7 +12970,7 @@ where
             };
             for channel_id in channel_ids {
                 // Untrusted messages from peer, we throw away the error if id points to a non-existent channel
-                let _ = self.force_close_channel_with_peer(&channel_id, &counterparty_node_id, Some(&msg.data), true);
+                let _ = self.force_close_channel_with_peer(&channel_id, &counterparty_node_id, reason.clone());
             }
         } else {
             {

@@ -13043,7 +13006,7 @@ where
             }
 
             // Untrusted messages from peer, we throw away the error if id points to a non-existent channel
-            let _ = self.force_close_channel_with_peer(&msg.channel_id, &counterparty_node_id, Some(&msg.data), true);
+            let _ = self.force_close_channel_with_peer(&msg.channel_id, &counterparty_node_id, reason);
         }
     }
 
@@ -16346,9 +16309,9 @@ mod tests {
 
         let chan = create_announced_chan_between_nodes(&nodes, 0, 1);
 
-        nodes[0].node.force_close_channel_with_peer(&chan.2, &nodes[1].node.get_our_node_id(), None, true).unwrap();
-        check_added_monitors!(nodes[0], 1);
         let message = "Channel force-closed".to_owned();
+        nodes[0].node.force_close_broadcasting_latest_txn(&chan.2, &nodes[1].node.get_our_node_id(), message.clone()).unwrap();
+        check_added_monitors!(nodes[0], 1);
         let reason = ClosureReason::HolderForceClosed { broadcasted_latest_txn: Some(true), message };
         check_closed_event!(nodes[0], 1, reason, [nodes[1].node.get_our_node_id()], 100000);
 
@@ -16552,8 +16515,6 @@ mod tests {
 
         check_unkown_peer_error(nodes[0].node.force_close_broadcasting_latest_txn(&channel_id, &unkown_public_key, error_message.to_string()), unkown_public_key);
 
-        check_unkown_peer_error(nodes[0].node.force_close_without_broadcasting_txn(&channel_id, &unkown_public_key, error_message.to_string()), unkown_public_key);
-
         check_unkown_peer_error(nodes[0].node.forward_intercepted_htlc(intercept_id, &channel_id, unkown_public_key, 1_000_000), unkown_public_key);
 
         check_unkown_peer_error(nodes[0].node.update_channel_config(&unkown_public_key, &[channel_id], &ChannelConfig::default()), unkown_public_key);

@@ -16584,8 +16545,6 @@ mod tests {
 
         check_channel_unavailable_error(nodes[0].node.force_close_broadcasting_latest_txn(&channel_id, &counterparty_node_id, error_message.to_string()), channel_id, counterparty_node_id);
 
-        check_channel_unavailable_error(nodes[0].node.force_close_without_broadcasting_txn(&channel_id, &counterparty_node_id, error_message.to_string()), channel_id, counterparty_node_id);
-
         check_channel_unavailable_error(nodes[0].node.forward_intercepted_htlc(InterceptId([0; 32]), &channel_id, counterparty_node_id, 1_000_000), channel_id, counterparty_node_id);
 
         check_channel_unavailable_error(nodes[0].node.update_channel_config(&counterparty_node_id, &[channel_id], &ChannelConfig::default()), channel_id, counterparty_node_id);
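The rewritten `force_close_channel_with_peer` threads a `ClosureReason` through instead of the old `peer_msg`/`broadcast` pair, and the new `MsgHandleErrInternal::dont_send_error_message` suppresses the outbound reply when the peer initiated the close. A self-contained sketch of those two pieces of logic, using simplified stand-in enums rather than the real LDK types:

```rust
// Simplified stand-ins for the LDK types involved; NOT the real definitions.
#[derive(Clone, Debug, PartialEq)]
enum ClosureReason {
    CounterpartyForceClosed { peer_msg: String },
    HolderForceClosed { broadcasted_latest_txn: Option<bool>, message: String },
}

#[derive(Debug, PartialEq)]
enum ErrorAction {
    DisconnectPeer { msg: Option<String> },
    SendErrorMessage { msg: String },
    IgnoreError,
}

// Mirrors MsgHandleErrInternal::dont_send_error_message: keep the
// disconnect/ignore behaviour but drop any outbound error message, so we
// never echo an error back at a peer that already force-closed on us.
fn dont_send_error_message(action: &mut ErrorAction) {
    match action {
        ErrorAction::DisconnectPeer { msg } => *msg = None,
        ErrorAction::SendErrorMessage { .. } => *action = ErrorAction::IgnoreError,
        _ => {},
    }
}

// Mirrors the message selection at the top of force_close_channel_with_peer:
// a holder force-close carries the user-provided message; for anything else
// the real code falls back to `reason.to_string()` (approximated here).
fn error_message_for(reason: &ClosureReason) -> String {
    match reason {
        ClosureReason::HolderForceClosed { message, .. } => message.clone(),
        ClosureReason::CounterpartyForceClosed { peer_msg } => {
            format!("counterparty closed with: {}", peer_msg)
        },
    }
}
```

In the real code, `is_from_counterparty` is computed from the same `ClosureReason` with `matches!`, and only when it is false does the manager queue a `SendErrorMessage` event for the peer.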

lightning/src/ln/functional_test_utils.rs

Lines changed: 1 addition & 0 deletions

@@ -1722,6 +1722,7 @@ pub fn check_closed_broadcast(node: &Node, num_channels: usize, with_error_msg:
         dummy_connected = true;
     }
     let msg_events = node.node.get_and_clear_pending_msg_events();
+    eprintln!("{:?}", msg_events);
     assert_eq!(msg_events.len(), if with_error_msg { num_channels * 2 } else { num_channels });
     if dummy_connected {
         disconnect_dummy_node(&node);
