feat: implement dry-run mode for signer #5680

Merged 8 commits on Jan 30, 2025
11 changes: 10 additions & 1 deletion CHANGELOG.md
@@ -5,6 +5,15 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to the versioning scheme outlined in the [README.md](README.md).

## [Unreleased]

### Added
- Add `dry_run` configuration option to the `stacks-signer` config TOML. In dry-run mode, the
  signer binary runs as if it were a registered signer, but instead of broadcasting
  `StackerDB` messages, it logs `INFO` messages. Other interactions with the `stacks-node`
  behave normally (e.g., submitting validation requests, submitting finished blocks). A
  dry-run signer will error out if the supplied key actually belongs to a registered signer.
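The new option is a plain boolean in the signer's TOML config. A minimal illustrative fragment (host, endpoint, and key values here are placeholders, not defaults from the PR):

```toml
# Illustrative stacks-signer config fragment; values are placeholders.
node_host = "127.0.0.1:20443"
endpoint = "127.0.0.1:30000"
stacks_private_key = "<hex-encoded private key>"
network = "testnet"
# New in this PR: run as if registered, but log INFO messages instead of
# broadcasting StackerDB messages; defaults to false when absent.
dry_run = true
```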

## [3.1.0.0.3]

### Added
@@ -15,7 +24,7 @@ and this project adheres to the versioning scheme outlined in the [README.md](RE
### Changed

- The RPC endpoint `/v3/block_proposal` no longer will evaluate block proposals more than `block_proposal_max_age_secs` old
- When a transaction is dropped due to replace-by-fee, the `/drop_mempool_tx` event observer payload now includes `new_txid`, which is the transaction that replaced this dropped transaction. When a transaction is dropped for other reasons, `new_txid` is `null`. [#5381](https://github.com/stacks-network/stacks-core/pull/5381)
- Nodes will assume that all PoX anchor blocks exist by default, and stall initial block download indefinitely to await their arrival (#5502)

### Fixed
8 changes: 5 additions & 3 deletions stacks-signer/src/client/mod.rs
@@ -144,7 +144,7 @@ pub(crate) mod tests {
use stacks_common::util::hash::{Hash160, Sha256Sum};

use super::*;
use crate::config::{GlobalConfig, SignerConfig};
use crate::config::{GlobalConfig, SignerConfig, SignerConfigMode};

pub struct MockServerClient {
pub server: TcpListener,
@@ -393,8 +393,10 @@ pub(crate) mod tests {
}
SignerConfig {
reward_cycle,
signer_id: 0,
signer_slot_id: SignerSlotID(rand::thread_rng().gen_range(0..num_signers)), // Give a random signer slot id between 0 and num_signers
signer_mode: SignerConfigMode::Normal {
signer_id: 0,
signer_slot_id: SignerSlotID(rand::thread_rng().gen_range(0..num_signers)), // Give a random signer slot id between 0 and num_signers
},
signer_entries: SignerEntries {
signer_addr_to_id,
signer_id_to_pk,
85 changes: 65 additions & 20 deletions stacks-signer/src/client/stackerdb.rs
@@ -19,12 +19,13 @@ use clarity::codec::read_next;
use hashbrown::HashMap;
use libsigner::{MessageSlotID, SignerMessage, SignerSession, StackerDBSession};
use libstackerdb::{StackerDBChunkAckData, StackerDBChunkData};
use slog::{slog_debug, slog_warn};
use slog::{slog_debug, slog_info, slog_warn};
use stacks_common::types::chainstate::StacksPrivateKey;
use stacks_common::{debug, warn};
use stacks_common::util::hash::to_hex;
use stacks_common::{debug, info, warn};

use crate::client::{retry_with_exponential_backoff, ClientError};
use crate::config::SignerConfig;
use crate::config::{SignerConfig, SignerConfigMode};

/// The signer StackerDB slot ID, purposefully wrapped to prevent conflation with SignerID
#[derive(Debug, Clone, PartialEq, Eq, Hash, Copy, PartialOrd, Ord)]
@@ -36,6 +37,12 @@ impl std::fmt::Display for SignerSlotID {
}
}

#[derive(Debug)]
enum StackerDBMode {
DryRun,
Normal { signer_slot_id: SignerSlotID },
}

/// The StackerDB client for communicating with the .signers contract
#[derive(Debug)]
pub struct StackerDB<M: MessageSlotID + std::cmp::Eq> {
@@ -46,32 +53,60 @@ pub struct StackerDB<M: MessageSlotID + std::cmp::Eq> {
stacks_private_key: StacksPrivateKey,
/// A map of a message ID to last chunk version for each session
slot_versions: HashMap<M, HashMap<SignerSlotID, u32>>,
/// The signer slot ID -- the index into the signer list for this signer daemon's signing key.
signer_slot_id: SignerSlotID,
/// The running mode of the stackerdb (whether the signer is running in dry-run or
/// normal operation)
mode: StackerDBMode,
/// The reward cycle of the connecting signer
reward_cycle: u64,
}

impl<M: MessageSlotID + 'static> From<&SignerConfig> for StackerDB<M> {
fn from(config: &SignerConfig) -> Self {
let mode = match config.signer_mode {
SignerConfigMode::DryRun => StackerDBMode::DryRun,
SignerConfigMode::Normal {
ref signer_slot_id, ..
} => StackerDBMode::Normal {
signer_slot_id: *signer_slot_id,
},
};

Self::new(
&config.node_host,
config.stacks_private_key,
config.mainnet,
config.reward_cycle,
config.signer_slot_id,
mode,
)
}
}

impl<M: MessageSlotID + 'static> StackerDB<M> {
/// Create a new StackerDB client
pub fn new(
#[cfg(any(test, feature = "testing"))]
/// Create a StackerDB client in normal operation (i.e., not a dry-run signer)
pub fn new_normal(
host: &str,
stacks_private_key: StacksPrivateKey,
is_mainnet: bool,
reward_cycle: u64,
signer_slot_id: SignerSlotID,
) -> Self {
Self::new(
host,
stacks_private_key,
is_mainnet,
reward_cycle,
StackerDBMode::Normal { signer_slot_id },
)
}

/// Create a new StackerDB client
fn new(
host: &str,
stacks_private_key: StacksPrivateKey,
is_mainnet: bool,
reward_cycle: u64,
signer_mode: StackerDBMode,
) -> Self {
let mut signers_message_stackerdb_sessions = HashMap::new();
for msg_id in M::all() {
@@ -84,7 +119,7 @@ impl<M: MessageSlotID + 'static> StackerDB<M> {
signers_message_stackerdb_sessions,
stacks_private_key,
slot_versions: HashMap::new(),
signer_slot_id,
mode: signer_mode,
reward_cycle,
}
}
@@ -110,18 +145,33 @@ impl<M: MessageSlotID + 'static> StackerDB<M> {
msg_id: &M,
message_bytes: Vec<u8>,
) -> Result<StackerDBChunkAckData, ClientError> {
let slot_id = self.signer_slot_id;
let StackerDBMode::Normal {
signer_slot_id: slot_id,
} = &self.mode
else {
info!(
"Dry-run signer would have sent a stackerdb message";
"message_id" => ?msg_id,
"message_bytes" => to_hex(&message_bytes)
);
return Ok(StackerDBChunkAckData {
accepted: true,
reason: None,
metadata: None,
code: None,
});
};
loop {
let mut slot_version = if let Some(versions) = self.slot_versions.get_mut(msg_id) {
if let Some(version) = versions.get(&slot_id) {
if let Some(version) = versions.get(slot_id) {
*version
} else {
versions.insert(slot_id, 0);
versions.insert(*slot_id, 0);
1
}
} else {
let mut versions = HashMap::new();
versions.insert(slot_id, 0);
versions.insert(*slot_id, 0);
self.slot_versions.insert(*msg_id, versions);
1
};
@@ -143,7 +193,7 @@ impl<M: MessageSlotID + 'static> StackerDB<M> {

if let Some(versions) = self.slot_versions.get_mut(msg_id) {
// NOTE: per the above, this is always executed
versions.insert(slot_id, slot_version.saturating_add(1));
versions.insert(*slot_id, slot_version.saturating_add(1));
} else {
return Err(ClientError::NotConnected);
}
@@ -165,7 +215,7 @@ impl<M: MessageSlotID + 'static> StackerDB<M> {
}
if let Some(versions) = self.slot_versions.get_mut(msg_id) {
// NOTE: per the above, this is always executed
versions.insert(slot_id, slot_version.saturating_add(1));
versions.insert(*slot_id, slot_version.saturating_add(1));
} else {
return Err(ClientError::NotConnected);
}
@@ -216,11 +266,6 @@ impl<M: MessageSlotID + 'static> StackerDB<M> {
u32::try_from(self.reward_cycle % 2).expect("FATAL: reward cycle % 2 exceeds u32::MAX")
}

/// Retrieve the signer slot ID
pub fn get_signer_slot_id(&self) -> SignerSlotID {
self.signer_slot_id
}

/// Get the session corresponding to the given message ID if it exists
pub fn get_session_mut(&mut self, msg_id: &M) -> Option<&mut StackerDBSession> {
self.signers_message_stackerdb_sessions.get_mut(msg_id)
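The dry-run dispatch introduced in `send_message_bytes_with_retry` can be sketched in isolation. This is a simplified stand-in (no real StackerDB session, no retry loop, and a bare `Result<bool, String>` in place of `StackerDBChunkAckData`); only the control flow mirrors the PR:

```rust
// Simplified stand-in for the PR's SignerSlotID newtype.
#[derive(Debug, Clone, Copy, PartialEq)]
struct SignerSlotID(u32);

// Mirrors the private StackerDBMode enum added in stackerdb.rs.
#[derive(Debug)]
enum StackerDBMode {
    DryRun,
    Normal { signer_slot_id: SignerSlotID },
}

fn send_message(mode: &StackerDBMode, message_bytes: &[u8]) -> Result<bool, String> {
    // In dry-run mode, log what would have been sent and report a
    // synthetic success, mirroring the PR's `accepted: true` ack.
    let StackerDBMode::Normal { signer_slot_id } = mode else {
        println!(
            "Dry-run signer would have sent a stackerdb message ({} bytes)",
            message_bytes.len()
        );
        return Ok(true);
    };
    // Normal mode would put the chunk to the signer's slot here.
    println!("Broadcasting to slot {:?}", signer_slot_id);
    Ok(true)
}

fn main() {
    assert_eq!(send_message(&StackerDBMode::DryRun, b"hello"), Ok(true));
    let normal = StackerDBMode::Normal {
        signer_slot_id: SignerSlotID(3),
    };
    assert_eq!(send_message(&normal, b"hello"), Ok(true));
}
```

The `let ... else` guard keeps the normal-mode path unindented, which is why the PR's loop body only needed `slot_id` dereference changes rather than a full rewrite.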
39 changes: 35 additions & 4 deletions stacks-signer/src/config.rs
@@ -39,6 +39,7 @@ const BLOCK_PROPOSAL_TIMEOUT_MS: u64 = 600_000;
const BLOCK_PROPOSAL_VALIDATION_TIMEOUT_MS: u64 = 120_000;
const DEFAULT_FIRST_PROPOSAL_BURN_BLOCK_TIMING_SECS: u64 = 60;
const DEFAULT_TENURE_LAST_BLOCK_PROPOSAL_TIMEOUT_SECS: u64 = 30;
const DEFAULT_DRY_RUN: bool = false;
const TENURE_IDLE_TIMEOUT_SECS: u64 = 120;

#[derive(thiserror::Error, Debug)]
@@ -106,15 +107,36 @@ impl Network {
}
}

/// Signer config mode (whether dry-run or real)
#[derive(Debug, Clone)]
pub enum SignerConfigMode {
/// Dry run operation: signer is not actually registered, the signer
/// will not submit stackerdb messages, etc.
DryRun,
/// Normal signer operation: if registered, the signer will submit
/// stackerdb messages, etc.
Normal {
/// The signer ID assigned to this signer (may be different from signer_slot_id)
signer_id: u32,
/// The signer stackerdb slot id (may be different from signer_id)
signer_slot_id: SignerSlotID,
},
}

impl std::fmt::Display for SignerConfigMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
SignerConfigMode::DryRun => write!(f, "Dry-Run signer"),
SignerConfigMode::Normal { signer_id, .. } => write!(f, "signer #{signer_id}"),
}
}
}

/// The Configuration info needed for an individual signer per reward cycle
#[derive(Debug, Clone)]
pub struct SignerConfig {
/// The reward cycle of the configuration
pub reward_cycle: u64,
/// The signer ID assigned to this signer (may be different from signer_slot_id)
pub signer_id: u32,
/// The signer stackerdb slot id (may be different from signer_id)
pub signer_slot_id: SignerSlotID,
/// The registered signers for this reward cycle
pub signer_entries: SignerEntries,
/// The signer slot ids of all signers registered for this reward cycle
@@ -141,6 +163,8 @@ pub struct SignerConfig {
pub tenure_idle_timeout: Duration,
/// The maximum age of a block proposal in seconds that will be processed by the signer
pub block_proposal_max_age_secs: u64,
/// The running mode for the signer (dry-run or normal)
pub signer_mode: SignerConfigMode,
}

/// The parsed configuration for the signer
@@ -181,6 +205,8 @@ pub struct GlobalConfig {
pub tenure_idle_timeout: Duration,
/// The maximum age of a block proposal that will be processed by the signer
pub block_proposal_max_age_secs: u64,
/// Is this signer binary going to be running in dry-run mode?
pub dry_run: bool,
}

/// Internal struct for loading up the config file
@@ -220,6 +246,8 @@ struct RawConfigFile {
pub tenure_idle_timeout_secs: Option<u64>,
/// The maximum age of a block proposal (in secs) that will be processed by the signer.
pub block_proposal_max_age_secs: Option<u64>,
/// Is this signer binary going to be running in dry-run mode?
pub dry_run: Option<bool>,
}

impl RawConfigFile {
@@ -321,6 +349,8 @@ impl TryFrom<RawConfigFile> for GlobalConfig {
.block_proposal_max_age_secs
.unwrap_or(DEFAULT_BLOCK_PROPOSAL_MAX_AGE_SECS);

let dry_run = raw_data.dry_run.unwrap_or(DEFAULT_DRY_RUN);

Ok(Self {
node_host: raw_data.node_host,
endpoint,
@@ -338,6 +368,7 @@
block_proposal_validation_timeout,
tenure_idle_timeout,
block_proposal_max_age_secs,
dry_run,
})
}
}
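The config plumbing above reduces to two small pieces: an optional TOML field defaulted to `false`, and a `Display` impl that labels the mode in logs. A standalone sketch (the enum and messages mirror the PR; `resolve_dry_run` is a hypothetical helper standing in for the `TryFrom<RawConfigFile>` conversion):

```rust
use std::fmt;

// Simplified stand-in for the PR's SignerConfigMode (Normal also
// carries a signer_slot_id in the real code).
enum SignerConfigMode {
    DryRun,
    Normal { signer_id: u32 },
}

impl fmt::Display for SignerConfigMode {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SignerConfigMode::DryRun => write!(f, "Dry-Run signer"),
            SignerConfigMode::Normal { signer_id } => write!(f, "signer #{signer_id}"),
        }
    }
}

/// Mirrors `raw_data.dry_run.unwrap_or(DEFAULT_DRY_RUN)`: an absent
/// TOML field falls back to normal (non-dry-run) operation.
fn resolve_dry_run(raw: Option<bool>) -> bool {
    raw.unwrap_or(false) // DEFAULT_DRY_RUN
}

fn main() {
    assert!(!resolve_dry_run(None));
    assert!(resolve_dry_run(Some(true)));
    assert_eq!(SignerConfigMode::DryRun.to_string(), "Dry-Run signer");
    assert_eq!(
        SignerConfigMode::Normal { signer_id: 7 }.to_string(),
        "signer #7"
    );
}
```

Defaulting via `Option::unwrap_or` keeps existing configs valid: operators who never set `dry_run` get exactly the previous behavior.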
Expand Down