Commit 524361d
Drop the (broken) ability to disable broadcast when FC'ing a chan
`ChannelManager` has two public methods to force-close a (single) channel - `force_close_broadcasting_latest_txn` and `force_close_without_broadcasting_txn`. The second exists to allow a user who has restored a stale backup to (sort of) recover their funds by starting, force-closing channels without broadcasting their latest state, and only then reconnecting to peers (which would otherwise cause a panic due to the stale state).

To my knowledge, no one has ever implemented this (fairly half-assed) recovery flow, and more importantly it's rather substantially broken. `ChannelMonitor` has never tracked any concept of whether a channel is stale, and if we connect blocks towards an HTLC time-out it will immediately broadcast the stale state anyway. Finally, we really shouldn't encourage users to try to "recover" from a stale state by running an otherwise-operational node on the other end.

Rather, we should formalize such a recovery flow by having a `ChainMonitor` variant that takes a list of `ChannelMonitor`s and claims available funds, dropping the `ChannelManager` entirely. While work continues towards that goal, having long-broken functionality in `ChannelManager` is also not acceptable, so here we simply remove the "without broadcasting" force-closure version.

Fixes #2875
1 parent 7306a8d commit 524361d

File tree

4 files changed: +84 -136 lines changed


lightning/src/events/mod.rs

Lines changed: 8 additions & 13 deletions
@@ -327,15 +327,10 @@ pub enum ClosureReason {
 	/// Whether or not the latest transaction was broadcasted when the channel was force
 	/// closed.
 	///
-	/// Channels closed using [`ChannelManager::force_close_broadcasting_latest_txn`] will have
-	/// this field set to true, whereas channels closed using [`ChannelManager::force_close_without_broadcasting_txn`]
-	/// or force-closed prior to being funded will have this field set to false.
+	/// TODO: Update docs on when this will happen!
 	///
 	/// This will be `None` for objects generated or written by LDK 0.0.123 and
 	/// earlier.
-	///
-	/// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn.
-	/// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn.
 	broadcasted_latest_txn: Option<bool>,
 	/// The error message provided to [`ChannelManager::force_close_broadcasting_latest_txn`] or
 	/// [`ChannelManager::force_close_all_channels_broadcasting_latest_txn`].
@@ -1486,7 +1481,7 @@ pub enum Event {
 	///
 	/// To accept the request (and in the case of a dual-funded channel, not contribute funds),
 	/// call [`ChannelManager::accept_inbound_channel`].
-	/// To reject the request, call [`ChannelManager::force_close_without_broadcasting_txn`].
+	/// To reject the request, call [`ChannelManager::force_close_broadcasting_latest_txn`].
 	/// Note that a ['ChannelClosed`] event will _not_ be triggered if the channel is rejected.
 	///
 	/// The event is only triggered when a new open channel request is received and the
@@ -1497,27 +1492,27 @@ pub enum Event {
 	/// returning `Err(ReplayEvent ())`) and won't be persisted across restarts.
 	///
 	/// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-	/// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+	/// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
 	/// [`UserConfig::manually_accept_inbound_channels`]: crate::util::config::UserConfig::manually_accept_inbound_channels
 	OpenChannelRequest {
 		/// The temporary channel ID of the channel requested to be opened.
 		///
 		/// When responding to the request, the `temporary_channel_id` should be passed
 		/// back to the ChannelManager through [`ChannelManager::accept_inbound_channel`] to accept,
-		/// or through [`ChannelManager::force_close_without_broadcasting_txn`] to reject.
+		/// or through [`ChannelManager::force_close_broadcasting_latest_txn`] to reject.
 		///
 		/// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-		/// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+		/// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
 		temporary_channel_id: ChannelId,
 		/// The node_id of the counterparty requesting to open the channel.
 		///
 		/// When responding to the request, the `counterparty_node_id` should be passed
 		/// back to the `ChannelManager` through [`ChannelManager::accept_inbound_channel`] to
-		/// accept the request, or through [`ChannelManager::force_close_without_broadcasting_txn`] to reject the
-		/// request.
+		/// accept the request, or through [`ChannelManager::force_close_broadcasting_latest_txn`]
+		/// to reject the request.
 		///
 		/// [`ChannelManager::accept_inbound_channel`]: crate::ln::channelmanager::ChannelManager::accept_inbound_channel
-		/// [`ChannelManager::force_close_without_broadcasting_txn`]: crate::ln::channelmanager::ChannelManager::force_close_without_broadcasting_txn
+		/// [`ChannelManager::force_close_broadcasting_latest_txn`]: crate::ln::channelmanager::ChannelManager::force_close_broadcasting_latest_txn
 		counterparty_node_id: PublicKey,
 		/// The channel value of the requested channel.
 		funding_satoshis: u64,

lightning/src/ln/channel.rs

Lines changed: 6 additions & 12 deletions
@@ -8282,11 +8282,6 @@ where
 
 	/// May panic if some calls other than message-handling calls (which will all Err immediately)
 	/// have been called between remove_uncommitted_htlcs_and_mark_paused and this call.
-	///
-	/// Some links printed in log lines are included here to check them during build (when run with
-	/// `cargo doc --document-private-items`):
-	/// [`super::channelmanager::ChannelManager::force_close_without_broadcasting_txn`] and
-	/// [`super::channelmanager::ChannelManager::force_close_all_channels_without_broadcasting_txn`].
 	#[rustfmt::skip]
 	pub fn channel_reestablish<L: Deref, NS: Deref>(
 		&mut self, msg: &msgs::ChannelReestablish, logger: &L, node_signer: &NS,
@@ -8324,18 +8319,17 @@
 		if msg.next_remote_commitment_number > our_commitment_transaction {
 			macro_rules! log_and_panic {
 				($err_msg: expr) => {
-					log_error!(logger, $err_msg, &self.context.channel_id, log_pubkey!(self.context.counterparty_node_id));
-					panic!($err_msg, &self.context.channel_id, log_pubkey!(self.context.counterparty_node_id));
+					log_error!(logger, $err_msg);
+					panic!($err_msg);
 				}
 			}
 			log_and_panic!("We have fallen behind - we have received proof that if we broadcast our counterparty is going to claim all our funds.\n\
 				This implies you have restarted with lost ChannelMonitor and ChannelManager state, the first of which is a violation of the LDK chain::Watch requirements.\n\
 				More specifically, this means you have a bug in your implementation that can cause loss of funds, or you are running with an old backup, which is unsafe.\n\
-				If you have restored from an old backup and wish to force-close channels and return to operation, you should start up, call\n\
-				ChannelManager::force_close_without_broadcasting_txn on channel {} with counterparty {} or\n\
-				ChannelManager::force_close_all_channels_without_broadcasting_txn, then reconnect to peer(s).\n\
-				Note that due to a long-standing bug in lnd you may have to reach out to peers running lnd-based nodes to ask them to manually force-close channels\n\
-				See https://github.com/lightningdevkit/rust-lightning/issues/1565 for more info.");
+				If you have restored from an old backup and wish to claim any available funds, you should restart with\n\
+				an empty ChannelManager and no ChannelMonitors, reconnect to peer(s), ensure they've force-closed all of your\n\
+				previous channels and that the closure transaction(s) have confirmed on-chain,\n\
+				then restart with an empty ChannelManager and the latest ChannelMonitors that you do have.");
 		}
 	}
 
lightning/src/ln/channelmanager.rs

Lines changed: 66 additions & 107 deletions
@@ -942,6 +942,16 @@ impl MsgHandleErrInternal {
 		Self { err, closes_channel: false, shutdown_finish: None }
 	}
 
+	fn dont_send_error_message(&mut self) {
+		match &mut self.err.action {
+			msgs::ErrorAction::DisconnectPeer { msg } => *msg = None,
+			msgs::ErrorAction::SendErrorMessage { msg: _ } => {
+				self.err.action = msgs::ErrorAction::IgnoreError;
+			},
+			_ => {},
+		}
+	}
+
 	fn closes_channel(&self) -> bool {
 		self.closes_channel
 	}
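The new `dont_send_error_message` helper is what lets a counterparty-initiated force-close avoid echoing a fresh `error` message back to the peer. Its match logic can be sketched standalone; the `ErrorAction` enum below is a simplified stand-in for LDK's `msgs::ErrorAction` (the real variants carry different payload types), not the actual API:

```rust
// Simplified stand-in for LDK's msgs::ErrorAction; illustrative only.
#[derive(Debug, PartialEq)]
enum ErrorAction {
    DisconnectPeer { msg: Option<String> },
    SendErrorMessage { msg: String },
    IgnoreError,
}

// Mirrors MsgHandleErrInternal::dont_send_error_message from this commit:
// keep disconnecting if we were going to, but strip any outbound error text,
// and downgrade a pure SendErrorMessage to doing nothing at all.
fn dont_send_error_message(action: &mut ErrorAction) {
    match action {
        ErrorAction::DisconnectPeer { msg } => *msg = None,
        ErrorAction::SendErrorMessage { .. } => *action = ErrorAction::IgnoreError,
        _ => {},
    }
}

fn main() {
    let mut a = ErrorAction::DisconnectPeer { msg: Some("stale".to_string()) };
    dont_send_error_message(&mut a);
    assert_eq!(a, ErrorAction::DisconnectPeer { msg: None });

    let mut b = ErrorAction::SendErrorMessage { msg: "oops".to_string() };
    dont_send_error_message(&mut b);
    assert_eq!(b, ErrorAction::IgnoreError);
    println!("ok");
}
```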
@@ -1992,7 +2002,7 @@
 /// 	match event {
 /// 		Event::OpenChannelRequest { temporary_channel_id, counterparty_node_id, .. } => {
 /// 			if !is_trusted(counterparty_node_id) {
-/// 				match channel_manager.force_close_without_broadcasting_txn(
+/// 				match channel_manager.force_close_broadcasting_latest_txn(
 /// 					&temporary_channel_id, &counterparty_node_id, error_message.to_string()
 /// 				) {
 /// 					Ok(()) => println!("Rejecting channel {}", temporary_channel_id),
@@ -4352,85 +4362,66 @@ where
 	/// `peer_msg` should be set when we receive a message from a peer, but not set when the
 	/// user closes, which will be re-exposed as the `ChannelClosed` reason.
 	#[rustfmt::skip]
-	fn force_close_channel_with_peer(&self, channel_id: &ChannelId, peer_node_id: &PublicKey, peer_msg: Option<&String>, broadcast: bool)
+	fn force_close_channel_with_peer(&self, channel_id: &ChannelId, peer_node_id: &PublicKey, reason: ClosureReason)
 	-> Result<(), APIError> {
 		let per_peer_state = self.per_peer_state.read().unwrap();
 		let peer_state_mutex = per_peer_state.get(peer_node_id)
 			.ok_or_else(|| APIError::ChannelUnavailable { err: format!("Can't find a peer matching the passed counterparty node_id {}", peer_node_id) })?;
-		let update_opt = {
-			let mut peer_state = peer_state_mutex.lock().unwrap();
-			let closure_reason = if let Some(peer_msg) = peer_msg {
-				ClosureReason::CounterpartyForceClosed { peer_msg: UntrustedString(peer_msg.to_string()) }
-			} else {
-				ClosureReason::HolderForceClosed {
-					broadcasted_latest_txn: Some(broadcast),
-					message: "Channel force-closed".to_owned(), // TODO
-				}
-			};
-			let logger = WithContext::from(&self.logger, Some(*peer_node_id), Some(*channel_id), None);
-			if let hash_map::Entry::Occupied(mut chan_entry) = peer_state.channel_by_id.entry(channel_id.clone()) {
-				log_error!(logger, "Force-closing channel {}", channel_id);
-				let (mut shutdown_res, update_opt) = match chan_entry.get_mut().as_funded_mut() {
-					Some(chan) => {
-						(
-							chan.context.force_shutdown(&chan.funding, broadcast, closure_reason),
-							self.get_channel_update_for_broadcast(&chan).ok(),
-						)
-					},
-					None => {
-						// Unfunded channel has no update
-						(chan_entry.get_mut().force_shutdown(false, closure_reason), None)
-					},
-				};
-				remove_channel_entry!(self, peer_state, chan_entry, shutdown_res);
-				mem::drop(peer_state);
-				mem::drop(per_peer_state);
-				self.finish_close_channel(shutdown_res);
-				update_opt
-			} else if peer_state.inbound_channel_request_by_id.remove(channel_id).is_some() {
-				log_error!(logger, "Force-closing channel {}", &channel_id);
-				// N.B. that we don't send any channel close event here: we
-				// don't have a user_channel_id, and we never sent any opening
-				// events anyway.
-				None
-			} else {
-				return Err(APIError::ChannelUnavailable{ err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, peer_node_id) });
-			}
+		let mut peer_state_lock = peer_state_mutex.lock().unwrap();
+		let peer_state = &mut *peer_state_lock;
+		let logger = WithContext::from(&self.logger, Some(*peer_node_id), Some(*channel_id), None);
+
+		let is_from_counterparty = matches!(reason, ClosureReason::CounterpartyForceClosed { .. });
+		let message = match &reason {
+			ClosureReason::HolderForceClosed { message, .. } => message.clone(),
+			_ => reason.to_string(),
 		};
-		if let Some(update) = update_opt {
-			// If we have some Channel Update to broadcast, we cache it and broadcast it later.
-			let mut pending_broadcast_messages = self.pending_broadcast_messages.lock().unwrap();
-			pending_broadcast_messages.push(MessageSendEvent::BroadcastChannelUpdate {
-				msg: update
-			});
+
+		if let Some(mut chan) = peer_state.channel_by_id.remove(channel_id) {
+			log_error!(logger, "Force-closing channel {}", channel_id);
+			let err = ChannelError::Close((message, reason));
+			let (_, mut e) = convert_channel_err!(self, peer_state, err, chan, channel_id);
+			mem::drop(peer_state_lock);
+			mem::drop(per_peer_state);
+			if is_from_counterparty {
+				// If the peer is the one who asked us to force-close, don't reply with a fresh
+				// error message.
+				e.dont_send_error_message();
+			}
+			let _ = handle_error!(self, Err::<(), _>(e), *peer_node_id);
+			Ok(())
+		} else if peer_state.inbound_channel_request_by_id.remove(channel_id).is_some() {
+			log_error!(logger, "Force-closing inbound channel request {}", &channel_id);
+			if !is_from_counterparty {
+				peer_state.pending_msg_events.push(
+					MessageSendEvent::HandleError {
+						node_id: *peer_node_id,
+						action: msgs::ErrorAction::SendErrorMessage {
+							msg: msgs::ErrorMessage { channel_id: *channel_id, data: message }
+						},
+					}
+				);
+			}
+			// N.B. that we don't send any channel close event here: we
+			// don't have a user_channel_id, and we never sent any opening
+			// events anyway.
+			Ok(())
+		} else {
+			Err(APIError::ChannelUnavailable{ err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, peer_node_id) })
 		}
-		Ok(())
 	}
 
 	#[rustfmt::skip]
-	fn force_close_sending_error(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, broadcast: bool, error_message: String)
+	fn force_close_sending_error(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String)
 	-> Result<(), APIError> {
 		let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
 		log_debug!(self.logger,
 			"Force-closing channel, The error message sent to the peer : {}", error_message);
-		match self.force_close_channel_with_peer(channel_id, &counterparty_node_id, None, broadcast) {
-			Ok(()) => {
-				let per_peer_state = self.per_peer_state.read().unwrap();
-				if let Some(peer_state_mutex) = per_peer_state.get(counterparty_node_id) {
-					let mut peer_state = peer_state_mutex.lock().unwrap();
-					peer_state.pending_msg_events.push(
-						MessageSendEvent::HandleError {
-							node_id: *counterparty_node_id,
-							action: msgs::ErrorAction::SendErrorMessage {
-								msg: msgs::ErrorMessage { channel_id: *channel_id, data: error_message }
-							},
-						}
-					);
-				}
-				Ok(())
-			},
-			Err(e) => Err(e)
-		}
+		let reason = ClosureReason::HolderForceClosed {
+			broadcasted_latest_txn: Some(true),
+			message: error_message,
+		};
+		self.force_close_channel_with_peer(channel_id, &counterparty_node_id, reason)
	}
 
 	/// Force closes a channel, immediately broadcasting the latest local transaction(s),
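The rewritten `force_close_channel_with_peer` derives the outbound error text from the `ClosureReason` it is handed: a holder-initiated close forwards the caller's `error_message` verbatim, anything else falls back to the reason's `Display` rendering. A self-contained sketch of that selection (the enum variants and `Display` strings here are simplified stand-ins for LDK's real `ClosureReason`):

```rust
use std::fmt;

// Simplified stand-in for LDK's ClosureReason; the real enum has more
// variants and different Display text. Illustrative only.
#[derive(Clone)]
enum ClosureReason {
    HolderForceClosed { message: String },
    CounterpartyForceClosed { peer_msg: String },
}

impl fmt::Display for ClosureReason {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ClosureReason::HolderForceClosed { .. } => write!(f, "channel force-closed by holder"),
            ClosureReason::CounterpartyForceClosed { peer_msg } => {
                write!(f, "counterparty force-closed with message: {}", peer_msg)
            },
        }
    }
}

// Mirrors the `let message = match &reason { ... }` selection in the diff.
fn outbound_error_text(reason: &ClosureReason) -> String {
    match reason {
        ClosureReason::HolderForceClosed { message } => message.clone(),
        _ => reason.to_string(),
    }
}

fn main() {
    let holder = ClosureReason::HolderForceClosed { message: "user requested".to_string() };
    assert_eq!(outbound_error_text(&holder), "user requested");

    let peer = ClosureReason::CounterpartyForceClosed { peer_msg: "bye".to_string() };
    assert!(outbound_error_text(&peer).contains("bye"));
    println!("ok");
}
```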
@@ -4444,23 +4435,7 @@ where
 	pub fn force_close_broadcasting_latest_txn(
 		&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String,
 	) -> Result<(), APIError> {
-		self.force_close_sending_error(channel_id, counterparty_node_id, true, error_message)
-	}
-
-	/// Force closes a channel, rejecting new HTLCs on the given channel but skips broadcasting
-	/// the latest local transaction(s).
-	///
-	/// The provided `error_message` is sent to connected peers for closing channels and should
-	/// be a human-readable description of what went wrong.
-	///
-	/// Fails if `channel_id` is unknown to the manager, or if the
-	/// `counterparty_node_id` isn't the counterparty of the corresponding channel.
-	/// You can always broadcast the latest local transaction(s) via
-	/// [`ChannelMonitor::broadcast_latest_holder_commitment_txn`].
-	pub fn force_close_without_broadcasting_txn(
-		&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, error_message: String,
-	) -> Result<(), APIError> {
-		self.force_close_sending_error(channel_id, counterparty_node_id, false, error_message)
+		self.force_close_sending_error(channel_id, counterparty_node_id, error_message)
 	}
 
 	/// Force close all channels, immediately broadcasting the latest local commitment transaction
@@ -4478,21 +4453,6 @@ where
 		}
 	}
 
-	/// Force close all channels rejecting new HTLCs on each but without broadcasting the latest
-	/// local transaction(s).
-	///
-	/// The provided `error_message` is sent to connected peers for closing channels and
-	/// should be a human-readable description of what went wrong.
-	pub fn force_close_all_channels_without_broadcasting_txn(&self, error_message: String) {
-		for chan in self.list_channels() {
-			let _ = self.force_close_without_broadcasting_txn(
-				&chan.channel_id,
-				&chan.counterparty.node_id,
-				error_message.clone(),
-			);
-		}
-	}
-
 	/// Initiate a splice, to change the channel capacity of an existing funded channel.
 	/// After completion of splicing, the funding transaction will be replaced by a new one, spending the old funding transaction,
 	/// with optional extra inputs (splice-in) and/or extra outputs (splice-out or change).
@@ -13069,6 +13029,9 @@ where
 
 		let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
 
+		let peer_msg = UntrustedString(msg.data.clone());
+		let reason = ClosureReason::CounterpartyForceClosed { peer_msg };
+
 		if msg.channel_id.is_zero() {
 			let channel_ids: Vec<ChannelId> = {
 				let per_peer_state = self.per_peer_state.read().unwrap();
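The peer-`error` handler now builds a single `CounterpartyForceClosed` reason up front and reuses it for every affected channel; per BOLT 1, an `error` whose `channel_id` is all-zeros applies to every channel with that peer. The dispatch can be modeled standalone (this `ChannelId` is a stand-in mirroring LDK's `is_zero` check, and `channels_to_close` is a hypothetical helper, not an LDK API):

```rust
// Stand-in for LDK's 32-byte ChannelId, with the is_zero() check the
// error handler relies on.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ChannelId([u8; 32]);

impl ChannelId {
    fn is_zero(&self) -> bool {
        self.0.iter().all(|b| *b == 0)
    }
}

// Per BOLT 1, an `error` with an all-zero channel_id targets every channel
// with the sending peer; otherwise only the named channel is force-closed.
fn channels_to_close(msg_channel_id: ChannelId, peer_channels: &[ChannelId]) -> Vec<ChannelId> {
    if msg_channel_id.is_zero() {
        peer_channels.to_vec()
    } else {
        vec![msg_channel_id]
    }
}

fn main() {
    let a = ChannelId([1; 32]);
    let b = ChannelId([2; 32]);
    assert_eq!(channels_to_close(ChannelId([0; 32]), &[a, b]), vec![a, b]);
    assert_eq!(channels_to_close(a, &[a, b]), vec![a]);
    println!("ok");
}
```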
@@ -13083,7 +13046,7 @@
 			};
 			for channel_id in channel_ids {
 				// Untrusted messages from peer, we throw away the error if id points to a non-existent channel
-				let _ = self.force_close_channel_with_peer(&channel_id, &counterparty_node_id, Some(&msg.data), true);
+				let _ = self.force_close_channel_with_peer(&channel_id, &counterparty_node_id, reason.clone());
 			}
 		} else {
 			{
@@ -13119,7 +13082,7 @@
 			}
 
 			// Untrusted messages from peer, we throw away the error if id points to a non-existent channel
-			let _ = self.force_close_channel_with_peer(&msg.channel_id, &counterparty_node_id, Some(&msg.data), true);
+			let _ = self.force_close_channel_with_peer(&msg.channel_id, &counterparty_node_id, reason);
 		}
 	}
 
@@ -16483,9 +16446,9 @@ mod tests {
 
 		let chan = create_announced_chan_between_nodes(&nodes, 0, 1);
 
-		nodes[0].node.force_close_channel_with_peer(&chan.2, &nodes[1].node.get_our_node_id(), None, true).unwrap();
-		check_added_monitors!(nodes[0], 1);
 		let message = "Channel force-closed".to_owned();
+		nodes[0].node.force_close_broadcasting_latest_txn(&chan.2, &nodes[1].node.get_our_node_id(), message.clone()).unwrap();
+		check_added_monitors!(nodes[0], 1);
 		let reason = ClosureReason::HolderForceClosed { broadcasted_latest_txn: Some(true), message };
 		check_closed_event!(nodes[0], 1, reason, [nodes[1].node.get_our_node_id()], 100000);
 
@@ -16689,8 +16652,6 @@
 
 		check_unkown_peer_error(nodes[0].node.force_close_broadcasting_latest_txn(&channel_id, &unkown_public_key, error_message.to_string()), unkown_public_key);
 
-		check_unkown_peer_error(nodes[0].node.force_close_without_broadcasting_txn(&channel_id, &unkown_public_key, error_message.to_string()), unkown_public_key);
-
 		check_unkown_peer_error(nodes[0].node.forward_intercepted_htlc(intercept_id, &channel_id, unkown_public_key, 1_000_000), unkown_public_key);
 
 		check_unkown_peer_error(nodes[0].node.update_channel_config(&unkown_public_key, &[channel_id], &ChannelConfig::default()), unkown_public_key);
@@ -16721,8 +16682,6 @@
 
 		check_channel_unavailable_error(nodes[0].node.force_close_broadcasting_latest_txn(&channel_id, &counterparty_node_id, error_message.to_string()), channel_id, counterparty_node_id);
 
-		check_channel_unavailable_error(nodes[0].node.force_close_without_broadcasting_txn(&channel_id, &counterparty_node_id, error_message.to_string()), channel_id, counterparty_node_id);
-
 		check_channel_unavailable_error(nodes[0].node.forward_intercepted_htlc(InterceptId([0; 32]), &channel_id, counterparty_node_id, 1_000_000), channel_id, counterparty_node_id);
 
 		check_channel_unavailable_error(nodes[0].node.update_channel_config(&counterparty_node_id, &[channel_id], &ChannelConfig::default()), channel_id, counterparty_node_id);
