diff --git a/.cargo/config.toml b/.cargo/config.toml
new file mode 100644
index 00000000..02369289
--- /dev/null
+++ b/.cargo/config.toml
@@ -0,0 +1,5 @@
+[source.crates-io]
+replace-with = "vendored-sources"
+
+[source.vendored-sources]
+directory = "vendor"
diff --git a/.github/workflows/cargo-build.yml b/.github/workflows/cargo-build.yml
index 6d97b5cd..e08035fb 100644
--- a/.github/workflows/cargo-build.yml
+++ b/.github/workflows/cargo-build.yml
@@ -1,4 +1,4 @@
-name: ubuntu-matrix
+name: cargo-build-matrix
 
 # Controls when the action will run.
 on:
diff --git a/.gitignore b/.gitignore
index d56fce3e..be765b71 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,6 +10,7 @@ node_modules
 # will have compiled files and executables
 debug/
 target/
+vendor/
 
 # Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
 # More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
diff --git a/BQS97/BQS97.pdf b/BQS97/BQS97.pdf
new file mode 100644
index 00000000..23ce5d5c
Binary files /dev/null and b/BQS97/BQS97.pdf differ
diff --git a/BQS97/Byzantine_Quorum_Systems.md b/BQS97/Byzantine_Quorum_Systems.md
new file mode 100644
index 00000000..1e447353
--- /dev/null
+++ b/BQS97/Byzantine_Quorum_Systems.md
@@ -0,0 +1,447 @@

## Byzantine Quorum Systems

#### Dahlia Malkhi · Michael Reiter

###### AT&T Labs—Research, Murray Hill, NJ, USA
###### {dalia,reiter}@research.att.com

### Abstract

Quorum systems are well-known tools for ensuring the consistency and availability of replicated data despite the benign failure of data repositories. In this paper we consider the arbitrary (Byzantine) failure of data repositories and present the first study of quorum system requirements and constructions that ensure data availability and consistency despite these failures. We also consider the load associated with our quorum systems, i.e., the minimal access probability of the busiest server. For services subject to arbitrary failures, we demonstrate quorum systems over n servers with a load of `O(1/sqrt(n))`, thus meeting the lower bound on load for benignly fault-tolerant quorum systems. We explore several variations of our quorum systems and extend our constructions to cope with arbitrary client failures.

### 1 Introduction

A well-known way to enhance the availability and performance of a replicated service is by using quorums. A quorum system for a universe of servers is a collection of subsets of servers, each pair of which intersect. Intuitively, each quorum can operate on behalf of the system, thus increasing its availability and performance, while the intersection property guarantees that operations done on distinct quorums preserve consistency.

In this paper we consider the arbitrary (Byzantine) failure of clients and servers, and initiate the study of quorum systems in this model. Intuitively, a quorum system tolerant of Byzantine failures is a collection of subsets of servers, each pair of which intersect in a set containing sufficiently many correct servers to guarantee consistency of the replicated data as seen by clients. We provide the following contributions.

`1.` We define the class of masking quorum systems, with which data can be consistently replicated in a way that is resilient to the arbitrary failure of data repositories. We present several example constructions of such systems and show necessary and sufficient conditions for the existence of masking quorum systems under different failure assumptions.

`2.` We explore two variations of masking quorum systems. The first, called dissemination quorum systems, is suited for services that receive and distribute self-verifying information from correct clients (e.g., digitally signed values) that faulty servers can fail to redistribute but cannot undetectably alter.
The second variation, called opaque masking quorum systems, is similar to regular masking quorums in that it makes no assumption of self-verifying data, but it differs in that clients do not need to know the failure scenarios for which the service was designed. This somewhat simplifies the client protocol and, in the case that the failures are maliciously induced, reveals less information to clients that could guide an attack attempting to compromise the system.

`3.` We explore the load of each type of quorum system, where the load of a quorum system is the minimal access probability of the busiest server, minimizing over all strategies for picking quorums. We present a masking quorum system with the property that its load over a total of n servers is `O(1/sqrt(n))`, thereby meeting the lower bound for the load of benignly fault-tolerant quorum systems. For opaque masking quorum systems, we prove a lower bound of 1/2 on the load, and present a construction that meets this lower bound and proves it tight.

`4.` For services that use masking quorums (opaque or not), we show how to deal with faulty clients in addition to faulty servers. The primary challenge raised by client failures is that there is no guarantee that clients will update quorums according to any specified protocol. Thus, a faulty client could leave the service in an inconsistent and irrecoverable state. We develop an update protocol, by which clients access the replicated service, that prevents clients from leaving the service in an inconsistent state. The protocol has the desirable property that it involves only the quorum at which an access is attempted, while providing system-wide consistency properties.

In our treatment, we express assumptions about possible failures in the system in the form of a fail-prone system B = {B1, ..., Bk} of servers, such that some Bi contains all the faulty servers.
This formulation includes typical failure assumptions that at most a threshold f of servers fail (e.g., the sets B1, ..., Bk could be all sets of f servers), but it also generalizes to allow less uniform failure scenarios. Our motivation for exploring this generalization stems from our experience in constructing secure distributed services [34, 27], i.e., distributed services that can tolerate the malicious corruption of some (typically, up to a threshold number of) component servers by an attacker. A criticism to assuming a simple threshold of corrupted servers is that server penetrations may not be independent. For example, servers in physical proximity to each other or in the same administrative domain may exhibit correlated probabilities of being captured, or servers with identical hardware and software platforms may have correlated probabilities of electronic penetration. By exploiting such correlations (i.e., knowledge of the collection B), we can design quorum systems that more effectively mask faulty servers.

Our quorum systems, if used in conjunction with appropriate protocols and synchronization mechanisms, can be used to implement a wide range of data semantics. In this paper, however, we choose to demonstrate a variable supporting read and write operations with relatively weak semantics, in order to maintain focus on our quorum constructions. These semantics imply a safe variable [24] in the case of a single reader and single writer, which a set of correct clients can use to build other abstractions, e.g., atomic multi-writer multi-reader registers [24, 21, 25], concurrent timestamp systems [12, 19], ℓ-exclusion [11, 2], and atomic snapshot scan [1, 5]. Our quorum constructions can also be directly exploited in algorithms that employ "uniform" quorums for fault tolerance (by involving a threshold of processes), in order to improve efficiency or tolerate non-uniform failure scenarios.
Examples include algorithms for shared memory emulation [6], randomized Byzantine agreement [39], reliable Byzantine multicast [8, 33, 27], and secure replicated data [18].

The rest of this paper is structured as follows. We begin in Section 2 with a description of related work. In Section 3 we present our system model and definitions. We present quorum systems for the replication of arbitrary data subject to arbitrary server failures in Section 4, and in Section 5 we present two variations of these systems. We then detail an access protocol for replicated services that tolerate faulty clients in addition to faulty servers in Section 6. We conclude in Section 7.

### 2 Related work

Our work was influenced by the substantial body of literature on quorum systems for benign failures and applications that make use of them, e.g., [15, 38, 26, 14, 17, 13, 9, 4, 30]. In particular, our grid construction of Section 4 was influenced by grid-like constructions for benign failures (e.g., [9]), and we borrow our definition of load from [30].

Quorum systems have been previously employed in the implementation of security mechanisms. Naor and Wool [31] described methods to construct an access-control service using quorums. Their constructions use cryptographic techniques to ensure that out-of-date (but correct) servers cannot grant access to unauthorized users. Agrawal and El Abbadi [3] and Mukkamala [29] considered the confidentiality of replicated data despite the disclosure of the contents of a threshold of the (otherwise correct) repositories. Their constructions used quorums with increased intersection, combined with Rabin's dispersal scheme [32], to enhance the confidentiality and availability of the data despite some servers crashing or their contents being observed. Our work differs from all of the above by considering arbitrarily faulty servers, and accommodating failure scenarios beyond a simple threshold of servers.

Herlihy and Tygar [18] applied quorums with increased intersection to the problem of protecting the confidentiality and integrity of replicated data against a threshold of arbitrarily faulty servers. In their constructions, replicated data is stored encrypted under a key that is shared among the servers using a threshold secret-sharing scheme [36], and each client accesses a threshold number of servers to reconstruct the key prior to performing (encrypted) reads and writes. This construction exhibits one approach to make replicated data self-verifying via encryption, and thus the quorum system they develop is a special case of our dissemination quorum systems, i.e., for a threshold of faulty servers.

### 3 Preliminaries

### 3.1 System model

We assume a universe U of servers, |U| = n, and an arbitrary number of clients that are distinct from the servers. A quorum system Q ⊆ 2^U is a set of subsets of U, any pair of which intersect. Each Q ∈ Q is called a quorum.

Servers (and clients) that obey their specifications are correct. A faulty server, however, may deviate from its specification arbitrarily. A fail-prone system B ⊆ 2^U is a set of subsets of U, none of which is contained in another, such that some B ∈ B contains all the faulty servers. The fail-prone system represents an assumption characterizing the failure scenarios that can occur, and could express typical assumptions that up to a threshold of servers fail, as well as less uniform assumptions.

In the remainder of this section, and throughout Sections 4 and 5, we assume that clients behave correctly. In Section 6 we will relax this assumption (and will be explicit when we do so).

We assume that any two correct processes (clients or servers) can communicate over an authenticated, reliable channel. That is, a correct process receives a message from another correct process if and only if the other correct process sent it.
However, we do not assume known bounds on message transmission times; i.e., communication is asynchronous.

### 3.2 Access protocol

We consider a problem in which the clients perform read and write operations on a variable x that is replicated at each server in the universe U. A copy of the variable x is stored at each server, along with a timestamp value t. Timestamps are assigned by a client to each replica of the variable when the client writes the replica. Our protocols require that different clients choose different timestamps, and thus each client c chooses its timestamps from some set T_c that does not intersect T_c' for any other client c'. The timestamps in T_c can be formed, e.g., as integers appended with the name of c in the low-order bits. The read and write operations are implemented as follows.

Write: For a client c to write the value v, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨v_u, t_u⟩}_{u∈Q}; chooses a timestamp t ∈ T_c greater than the highest timestamp value in A and greater than any timestamp it has chosen in the past; and updates x and the associated timestamp at each server in Q to v and t, respectively.

Read: For a client to read x, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨v_u, t_u⟩}_{u∈Q}. The client then applies a deterministic function Result() to A to obtain the result Result(A) of the read operation.

In the case of a write operation, each server updates its local variable and timestamp to the received values only if t is greater than the timestamp currently associated with the variable.

Two points about this description deserve further discussion. First, the nature of the quorum sets Q and the function Result() are intentionally left unspecified; further clarification of these is the point of this paper. Second, this description is intended to require a client to obtain a set A containing value/timestamp pairs from every server in some quorum Q.
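In the failure-free case, the write and read operations just described reduce to a simple exchange with a single quorum. The following is a minimal Python sketch of that case; the `Server` and `Client` classes and the quorum-selection rule are our own illustrative assumptions, not the paper's notation, and timestamps are `(counter, client name)` pairs so that each client draws from its own disjoint set T_c:

```python
from dataclasses import dataclass

@dataclass
class Server:
    """One repository: a copy of the variable x and its timestamp t."""
    value: object = None
    ts: tuple = (0, "")               # (counter, client name): unique per client

    def read(self):
        return (self.value, self.ts)

    def write(self, value, ts):
        # A server applies an update only if the timestamp is newer (Section 3.2).
        if ts > self.ts:
            self.value, self.ts = value, ts

class Client:
    def __init__(self, name, quorums):
        self.name = name              # distinct names keep the sets T_c disjoint
        self.quorums = quorums        # the quorum system Q, as lists of Servers
        self.counter = 0

    def write(self, value):
        Q = self.quorums[0]           # any quorum the client can reach in full
        A = [s.read() for s in Q]     # query every server in Q
        high = max(t for _, t in A)[0]
        self.counter = max(self.counter, high) + 1
        t = (self.counter, self.name) # exceeds everything in A and all past choices
        for s in Q:
            s.write(value, t)

    def read(self):
        Q = self.quorums[0]
        A = [s.read() for s in Q]
        # Result(): with no failures, simply the highest-timestamped pair.
        value, _ = max(A, key=lambda pair: pair[1])
        return value
```

Sections 4 and 5 replace this naive `Result()` with rules that tolerate faulty servers; the requirement that A cover an entire quorum is discussed next.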
That is, if a client is unable to gather a complete set A for a quorum, e.g., because some server in the quorum appears unresponsive, the client must try to perform the operation with a different quorum. This requirement stems from our lack of synchrony assumptions on the network: in general, the only way that a client can know that it has accessed every correct server in a quorum is to (apparently successfully) access every server in the quorum. Our framework guarantees the availability of a quorum at any moment, and thus by attempting the operation at multiple quorums, a client can eventually make progress. In some cases, the client can achieve progress by incrementally accessing servers until it obtains responses from a quorum of them.

In Sections 4 and 5, we will argue the correctness of the above protocol, instantiated with quorums and a Result() function that we will define, according to the following semantics; a more formal treatment of these concepts can be found in [24]. We say that a read operation begins when the client initiates the operation and ends when the client obtains the read value; an operation to write value v with timestamp t begins when the client initiates it and ends when all correct servers in some quorum have received the update ⟨v, t⟩. An operation op1 precedes an operation op2 if op1 ends before op2 begins (in real time). If op1 does not precede op2 and op2 does not precede op1, then they are called concurrent. Given a set of operations, a serialization of those operations is a total ordering on them that extends the precedence ordering among them. Then, for the above protocol to be correct, we require that any read that is concurrent with no writes returns the last value written in some serialization of the preceding writes. In the case of a single-reader, single-writer variable, this will immediately imply safe semantics [24].

### 3.3 Load

A measure of the inherent performance of a quorum system is its load.
Naor and Wool [30] define the load of a quorum system as the probability of accessing the busiest server in the best case. More precisely, given a quorum system Q, an access strategy w is a probability distribution on the elements of Q; i.e., Σ_{Q∈Q} w(Q) = 1. w(Q) is the probability that quorum Q will be chosen when the service is accessed. Load is then defined as follows:

Definition 3.1 Let a strategy w be given for a quorum system Q = {Q1, ..., Qm} over a universe U. For an element u ∈ U, the load induced by w on u is l_w(u) = Σ_{Q_i ∋ u} w(Q_i). The load induced by a strategy w on a quorum system Q is L_w(Q) = max_{u∈U} l_w(u). The system load (or just load) on a quorum system Q is L(Q) = min_w {L_w(Q)}, where the minimum is taken over all strategies. ❑

We reiterate that the load is a best-case definition. The load of the quorum system will be achieved only if an optimal access strategy is used, and only in the case that no failures occur. A strength of this definition is that load is a property of a quorum system, and not of the protocol using it. A comparison of the definition of load to other seemingly plausible definitions is given in [30].

### 4 Masking quorum systems

In this section we introduce masking quorum systems, which can be used to mask the arbitrarily faulty behavior of data repositories. To motivate our definition, suppose that the replicated variable x is written with quorum Q1, and that subsequently x is read using quorum Q2. If B is the set of arbitrarily faulty servers, then (Q1 ∩ Q2) \ B is the set of correct servers that possess the latest value for x. In order for the client to obtain this value, the client must be able to locate a value/timestamp pair returned by a set of servers that could not all be faulty. In addition, for availability we require that there be no set of faulty servers that can disable all quorums.

Definition 4.1 A quorum system Q is a masking quorum system for a fail-prone system B if the following properties are satisfied.

M1: ∀ Q1, Q2 ∈ Q, ∀ B1, B2 ∈ B: (Q1 ∩ Q2) \ B1 ⊄ B2

M2: ∀ B ∈ B, ∃ Q ∈ Q: B ∩ Q = ∅ ❑

It is not difficult to verify that a masking quorum system enables a client to obtain the correct answer from the service. The write operation is implemented as described in Section 3, and the read operation becomes:

Read: For a client to read a variable x, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨v_u, t_u⟩}_{u∈Q}. The client computes the set

A' = {⟨v, t⟩ : ∃ B⁺ ⊆ Q [ ∀ B ∈ B (B⁺ ⊄ B) ∧ ∀ u ∈ B⁺ (v_u = v ∧ t_u = t) ]}.

The client then chooses the pair ⟨v, t⟩ in A' with the highest timestamp, and chooses v as the result of the read operation; if A' is empty, the client returns ⊥ (a null value).

Lemma 4.2 A read operation that is concurrent with no write operations returns the value written by the last preceding write operation in some serialization of all preceding write operations.

Proof. Let W denote the set of write operations preceding the read. The read operation will return the value written in the write operation in W with the highest timestamp, since, by the construction of masking quorum systems, this value/timestamp pair will appear in A' and will have the highest timestamp in A' (any pair with a higher timestamp will be returned only by servers in some B ∈ B). So, it suffices to argue that there is a serialization of the writes in W in which this write operation appears last, or in other words, that this write operation precedes no other write operation in W. This is immediate, however, as if it did precede another write operation in W, that write operation would have a higher timestamp. ❑

This lemma implies that the protocol above implements a single-writer single-reader safe variable [24]. From these, multi-writer multi-reader atomic variables can be built using well-known constructions [24, 21, 25].
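The set A' above can be computed by grouping replies by the pair they report and keeping a pair only if its set of witnesses is contained in no B ∈ B. A small sketch in Python; the function and variable names here are ours, not the paper's:

```python
def masking_read_result(replies, fail_prone):
    """replies: dict mapping server -> (value, timestamp) for one quorum Q.
    fail_prone: the collection B, as a list of sets of servers.
    Returns the highest-timestamped pair in A', or None (the null value)."""
    # Group the servers of Q by the value/timestamp pair they returned.
    witnesses = {}
    for server, pair in replies.items():
        witnesses.setdefault(pair, set()).add(server)
    # A pair enters A' iff some witness set B+ is contained in no B in B;
    # taking the full witness set as B+ loses no generality, since any
    # superset of a qualifying B+ also qualifies.
    a_prime = [pair for pair, group in witnesses.items()
               if not any(group <= b for b in fail_prone)]
    if not a_prime:
        return None
    return max(a_prime, key=lambda pair: pair[1])
```

With a threshold fail-prone system (all sets of f servers), the condition degenerates to "more than f servers vouch for the pair", which is the familiar 2f+1-intersection voting rule.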
A necessary and sufficient condition for the existence of a masking quorum system (and a construction for one, if it exists) for any given fail-prone system B is given in the following theorem:

Theorem 4.3 Let B be a fail-prone system for a universe U. Then there exists a masking quorum system for B iff Q = {U \ B : B ∈ B} is a masking quorum system for B.

Proof. Obviously, if Q is a masking quorum system for B, then one exists. To show the converse, assume that Q is not a masking quorum system. Since M2 holds in Q by construction, there exist Q1, Q2 ∈ Q and B', B'' ∈ B, such that (Q1 ∩ Q2) \ B' ⊆ B''. Let B1 = U \ Q1 and B2 = U \ Q2. By the construction of Q, we know that B1, B2 ∈ B. By M2, any masking quorum system for B must contain quorums Q1' ⊆ Q1, Q2' ⊆ Q2. However, for any such Q1', Q2', it is the case that (Q1' ∩ Q2') \ B' ⊆ (Q1 ∩ Q2) \ B' ⊆ B'', violating M1. Therefore, there does not exist a masking quorum system for B under the assumption that Q is not a masking quorum system for B. ❑

Corollary 4.4 Let B be a fail-prone system for a universe U. Then there exists a masking quorum system for B iff for all B1, B2, B3, B4 ∈ B, U ⊄ B1 ∪ B2 ∪ B3 ∪ B4. In particular, suppose that B = {B ⊆ U : |B| = f}. Then, there exists a masking quorum system for B iff n > 4f.

Proof. By Theorem 4.3, there is a masking quorum system for B iff Q = {U \ B : B ∈ B} is a masking quorum system for B. By construction, Q is a masking quorum system iff M1 holds for Q, i.e., iff for all B1, B2, B3, B4 ∈ B: ((U \ B1) ∩ (U \ B2)) \ B3 ⊄ B4, i.e., iff for all B1, B2, B3, B4 ∈ B: U ⊄ B1 ∪ B2 ∪ B3 ∪ B4. ❑

The following theorem was proved in [30] for benign-failure quorum systems, and holds for masking quorums as well (as a result of M1). Let c(Q) denote the size of the smallest quorum of Q.

Theorem 4.5 If Q is a quorum system over a universe of n elements, then L(Q) ≥ max{1/c(Q), c(Q)/n}.

The proof of this theorem in [30] employs rather complex methods. Here we present a simpler proof of their theorem.

Proof. Let w be any strategy for the quorum system Q, and fix Q1 ∈ Q such that |Q1| = c(Q). Summing the loads induced by w on all the elements of Q1 we obtain:

Σ_{u∈Q1} l_w(u) = Σ_{u∈Q1} Σ_{Q∋u} w(Q) ≥ Σ_{Q∈Q} w(Q) = 1,

since every quorum intersects Q1. Therefore, there exists some element in Q1 that suffers a load of at least 1/c(Q). Similarly, summing the total load induced by w on all of the elements of the universe, we get:

Σ_{u∈U} l_w(u) = Σ_{Q∈Q} w(Q)|Q| ≥ c(Q).

(Here, the inequality results from the minimality of c(Q).) Therefore, there exists some element in U that suffers a load of at least c(Q)/n. ❑

Since any masking quorum system is a quorum system, we have, a fortiori:

Corollary 4.6 If Q is a masking quorum system over a universe of n elements, then L(Q) ≥ max{1/c(Q), c(Q)/n}, and thus L(Q) ≥ 1/√n.

Below we give several examples of masking quorum systems and describe their properties.

Example 4.7 (Threshold) Suppose that B = {B ⊆ U : |B| = f}, n > 4f. Note that this corresponds to the usual threshold assumption that up to f servers may fail. Then, the quorum system Q = {Q ⊆ U : |Q| = ⌈(n + 2f + 1)/2⌉} is a masking quorum system for B. M1 is satisfied because any Q1, Q2 ∈ Q will intersect in at least 2f + 1 elements. M2 holds because ⌈(n + 2f + 1)/2⌉ ≤ n − f. A strategy that assigns equal probability to each quorum induces a load of (1/n)⌈(n + 2f + 1)/2⌉ on the system. By Corollary 4.6, this load is in fact the load of the system. ❑

The following example is interesting since its load decreases as a function of n, and since it demonstrates a method for ensuring system-wide consistency in the face of Byzantine failures while requiring the involvement of fewer than a majority of the correct servers.

Example 4.8 (Grid quorums) Suppose that the universe of servers is of size n = k² for some integer k and that B = {B ⊆ U : |B| = f}, 3f + 1 ≤ √n. Arrange the universe into a √n × √n grid, as shown in Figure 1. Denote the rows and columns of the grid by R_i and C_i, respectively, where 1 ≤ i ≤ √n. Then, the quorum system

Q = { C_j ∪ ⋃_{i∈I} R_i : I, {j} ⊆ {1, ..., √n}, |I| = 2f + 1 }

is a masking quorum system for B. M1 holds since every pair of quorums intersect in at least 2f + 1 elements (the column of one quorum intersects the 2f + 1 rows of the other), and M2 holds since for any choice of f faulty elements in the grid, 2f + 1 full rows and a column remain available. A strategy that assigns equal probability to each quorum induces a load of ((2f + 2)√n − (2f + 1))/n, and again by Corollary 4.6, this is the load of the system. ❑

Figure 1: Grid construction, k × k = n, f = 1 (one quorum shaded).

Note that by choosing B = {∅} (i.e., f = 0) in the example above, the resulting construction has a load of O(1/√n), which asymptotically meets the bounds given in Corollary 4.6. In general, however, this construction yields a load of O(f/√n), which is not optimal: Malkhi et al. [28] show a lower bound of √((2f + 1)/n) on the load of any masking quorum system for B = {B ⊆ U : |B| = f}, and provide a construction whose load matches that bound.

Example 4.9 (Partition) Suppose that B = {B1, ..., Bm}, m > 4, is a partition of U where Bi ≠ ∅ for all i, 1 ≤ i ≤ m. This choice of B could arise, for example, in a wide area network composed of multiple local clusters, each containing some Bi, and expresses the assumption that at any time, at most one cluster is faulty. Then, any collection of nonempty sets B̂_i ⊆ B_i, 1 ≤ i ≤ m, can be thought of as 'super-elements' in a universe of size m, with a threshold assumption f = 1. Therefore, the following is a masking quorum system for B:

Q = { ⋃_{i∈I} B̂_i : I ⊆ {1, ..., m}, |I| = ⌈(m + 3)/2⌉ }

M1 is satisfied because the intersection of any two quorums contains elements from at least three sets in B. M2 holds since there is no B ∈ B that intersects all quorums. A strategy that assigns equal probability to each quorum induces a load of (1/m)⌈(m + 3)/2⌉ on the system regardless of the size of each B̂_i, and again Corollary 4.6 implies that this is the load of the system.

If m = k² for some k, then a more efficient construction can be achieved by forming the grid construction from Example 4.8 on the 'super-elements' {B̂_i}, achieving a load of (4√m − 3)/m. ❑

### 5 Variations

### 5.1 Dissemination quorum systems

As a special case of services that can employ quorums in a Byzantine environment, we now consider applications in which the service is a repository for self-verifying information, i.e., information that only clients can create and to which clients can detect any attempted modification by a faulty server. A natural example is a database of public key certificates as found in many public key distribution systems (e.g., [10, 37, 23]). A public key certificate is a structure containing a name for a user and a public key, and represents the assertion that the indicated public key can be used to authenticate messages from the indicated user. This structure is digitally signed (e.g., [35]) by a certification authority so that anyone with the public key of this authority can verify this assertion and, providing it trusts the authority, use the indicated public key to authenticate the indicated user. Due to this signature, it is not possible for a faulty server to undetectably modify a certificate it stores. However, a faulty server can undetectably suppress a change from propagating to clients, simply by ignoring an update from a certification authority. This could have the effect, e.g., of suppressing the revocation of a key that has been compromised.

As can be expected, the use of digital signatures to verify data improves the cost of accessing replicated data. To support such a service, we employ a dissemination quorum system, which has weaker requirements than masking quorums, but which nevertheless ensures that in applications like those above, self-verifying writes will be propagated to all subsequent read operations despite the arbitrary failure of some servers. To achieve this, it suffices for the intersection of every two quorums to not be contained in any set of potentially faulty servers (so that a written value can propagate to a read). And, supposing that operations are required to continue in the face of failures, there should be quorums that a faulty set cannot disable.

Definition 5.1 A quorum system Q is a dissemination quorum system for a fail-prone system B if the following properties are satisfied.

D1: ∀ Q1, Q2 ∈ Q, ∀ B ∈ B: Q1 ∩ Q2 ⊄ B

D2: ∀ B ∈ B, ∃ Q ∈ Q: B ∩ Q = ∅ ❑

A dissemination quorum system will suffice for propagating self-verifying information as in the application described above. The write operation is implemented as described in Section 3, and the read operation becomes:

Read: For a client to read a variable x, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨v_u, t_u⟩}_{u∈Q}. The client then discards those pairs that are not verifiable (e.g., using an appropriate digital signature verification algorithm) and chooses from the remaining pairs the pair ⟨v, t⟩ with the largest timestamp; v is the result of the read operation.

The lemma below proves correctness of this protocol using dissemination quorum systems; the proof is almost identical to that for masking quorum systems.

It is important to note that timestamps must be included as part of the self-verifying information, so they cannot be undetectably altered by faulty servers. In the case of the application described above, existing standards for public key certificates (e.g., [10]) already require a real-time timestamp in the certificate.
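With self-verifying data the read rule is simpler: drop whatever fails verification, then take the highest timestamp. A sketch of that rule, using an HMAC as a stand-in for the digital signatures the paper assumes (the key name and handling here are illustrative only; a real deployment would use public-key signatures so servers cannot sign):

```python
import hashlib
import hmac

WRITER_KEY = b"writer-key"   # stand-in only: the paper assumes digital
                             # signatures, which faulty servers cannot forge

def tag(value, ts):
    """Bind value and timestamp together, so neither can be altered alone."""
    msg = repr((value, ts)).encode()
    return hmac.new(WRITER_KEY, msg, hashlib.sha256).hexdigest()

def dissemination_read(replies):
    """replies: (value, timestamp, tag) triples gathered from one quorum."""
    valid = [(v, t) for (v, t, m) in replies
             if hmac.compare_digest(m, tag(v, t))]
    if not valid:
        return None
    value, _ = max(valid, key=lambda pair: pair[1])
    return value
```

Because forged pairs are discarded outright, D1's intersection condition only needs one correct, up-to-date server in each read quorum, which is why dissemination quorums get away with smaller intersections than masking quorums.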
`Lemma 5.2` A read operation that is concurrent with no write operations returns the value written by the last preceding write operation in some serialization of all preceding write operations.

Due to the assumption of self-verifying data, we can also prove in this case the following property.

`Lemma 5.3` A read operation that is concurrent with one or more write operations returns either the value written by the last preceding write operation in some serialization of all preceding write operations, or any of the values being written in the concurrent write operations.

The above lemmata imply that the protocol above implements a single-writer single-reader regular variable [24]. Theorems analogous to the ones given for masking quorum systems above are easily derived for dissemination quorums. Below, we list these results without proof.

`Theorem 5.4` Let B be a fail-prone system for a universe U. Then there exists a dissemination quorum system for B iff Q = {U \ B : B ∈ B} is a dissemination quorum system for B.

`Corollary 5.5` Let B be a fail-prone system for a universe U. Then there exists a dissemination quorum system for B iff for all B1, B2, B3 ∈ B, U ⊄ B1 ∪ B2 ∪ B3. In particular, suppose that B = {B ⊆ U : |B| = f}. Then, there exists a dissemination quorum system for B iff n > 3f.

`Corollary 5.6` If Q is a dissemination quorum system over a universe of n elements, then L(Q) ≥ max{(f+1)/c(Q), c(Q)/n}, where c(Q) is the size of the smallest quorum in Q, and thus also L(Q) ≥ sqrt((f+1)/n).

Below, we provide several example constructions of dissemination quorum systems.

`Example 5.7` (Threshold) Suppose that B = {B ⊆ U : |B| = f}, n > 3f. Note that this corresponds to the usual threshold assumption that up to f servers may fail. Then, the quorum system Q = {Q ⊆ U : |Q| = ⌈(n+f+1)/2⌉} is a dissemination quorum system for B with load (1/n)⌈(n+f+1)/2⌉. ❑

`Example 5.8` (Grid) Let the universe be arranged in a grid as in Example 4.8 above, and let B = {B ⊆ U : |B| = f}, 2f + 1 ≤ sqrt(n). Then, the quorum system

Q = { Cj ∪ ⋃_{i∈I} Ri : I, {j} ⊆ {1, ..., sqrt(n)}, |I| = f + 1 }

is a dissemination quorum system for B. The load of this system is ((f+2)sqrt(n) − (f+1))/n. ❑

`Example 5.9` (Partition) Suppose that B = {B1, ..., Bm}, m > 3f, is a partition of U. For any collection of nonempty sets B̂i ⊆ Bi, 1 ≤ i ≤ m, the Threshold construction of Example 5.7 on the 'super-elements' B̂i (as in Example 4.9) yields a dissemination quorum system with a load of (1/m)⌈(m+f+1)/2⌉. If m = k² for some k, the Grid construction of Example 5.8 achieves a load of ((f+2)k − (f+1))/k². ❑

### 5.2 Opaque masking quorum systems

Masking quorums impose a requirement that clients know the fail-prone system B, while there may be reasons that clients should not be required to know this. First, it somewhat complicates the client's read protocol. Second, by revealing the failure scenarios for which the system was designed, the system also reveals the failure scenarios to which it is vulnerable, which could be exploited by an attacker to guide an active attack against the system. By not revealing the fail-prone system to clients, and indeed giving each client only a small fraction of the possible quorums, the system can somewhat obscure (though perhaps not secure in any formal sense) the failure scenarios to which it is vulnerable, especially in the absence of client collusion.

In this section we describe one way to modify the masking quorum definition of Section 4 to be opaque, i.e., to eliminate the need for clients to know B. In the absence of the client knowing B, the only method of which we are aware for the client to reduce a set of replies from servers to a single reply from the service is via voting, i.e., choosing the reply that occurs most often. In order for this reply to be the correct one, however, we must strengthen the requirements on our quorum systems. Specifically, suppose that the variable x is written with quorum Q1, and that subsequently x is read with quorum Q2. If B is the set of arbitrarily faulty servers, then (Q1 ∩ Q2) \ B is the set of correct servers that possess the latest value for x (see Figure 2). In order for the client to obtain this value by vote, this set must be larger than the set of faulty servers that are allowed to respond, i.e., Q2 ∩ B. Moreover, since these faulty servers can "team up" with the out-of-date but correct servers in an effort to suppress the write operation, the number of correct, up-to-date servers that reply must be no less than the number of faulty or out-of-date servers that can reply, i.e., (Q2 ∩ B) ∪ (Q2 \ Q1).

*Figure 2 (omitted): a write to quorum Q1 followed by a read from quorum Q2, with fail-prone set B.*

`Definition 5.10` A quorum system Q is an opaque masking quorum system for a fail-prone system B if the following properties are satisfied.

O1: ∀Q1, Q2 ∈ Q ∀B ∈ B : |(Q1 ∩ Q2) \ B| ≥ |(Q2 ∩ B) ∪ (Q2 \ Q1)|

O2: ∀Q1, Q2 ∈ Q ∀B ∈ B : |(Q1 ∩ Q2) \ B| > |Q2 ∩ B|

O3: ∀B ∈ B ∃Q ∈ Q : B ∩ Q = ∅

Note that O1 admits the possibility of equality in size between (Q1 ∩ Q2) \ B and (Q2 ∩ B) ∪ (Q2 \ Q1). Equality is sufficient since, in the case that the faulty servers "team up" with the correct but out-of-date servers in Q2, the value returned from (Q1 ∩ Q2) \ B will have a higher timestamp than that returned by (Q2 ∩ B) ∪ (Q2 \ Q1). Therefore, in the case of a tie, a reader can choose the value with the higher timestamp. It is interesting to note that a strict inequality in O1 would permit a correct implementation of a single-reader single-writer safe variable that does not use timestamps (by taking the majority value in a read operation).

It is not difficult to verify that an opaque masking quorum system enables a client to obtain the correct answer from the service.
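For a small universe, conditions O1–O3 of Definition 5.10 can be checked by direct enumeration. The sketch below is our own illustration, not from the paper: server sets are encoded as bitmasks, and `is_opaque`, `subsets_of_size`, and the chosen parameters are hypothetical names for this example only.

```rust
// Brute-force check of the opaque masking quorum conditions O1–O3.
// Server sets are bitmasks over a universe of at most 32 servers.

/// O1: |(Q1 ∩ Q2) \ B| >= |(Q2 ∩ B) ∪ (Q2 \ Q1)|
/// O2: |(Q1 ∩ Q2) \ B| >  |Q2 ∩ B|
/// O3: every fail-prone set is avoided entirely by some quorum
fn is_opaque(quorums: &[u32], fail_prone: &[u32]) -> bool {
    for &q1 in quorums {
        for &q2 in quorums {
            for &b in fail_prone {
                let lhs = (q1 & q2 & !b).count_ones();
                if lhs < ((q2 & b) | (q2 & !q1)).count_ones() {
                    return false; // O1 violated
                }
                if lhs <= (q2 & b).count_ones() {
                    return false; // O2 violated
                }
            }
        }
    }
    fail_prone.iter().all(|&b| quorums.iter().any(|&q| q & b == 0)) // O3
}

/// All subsets of `universe` (a bitmask) with exactly `k` elements.
fn subsets_of_size(universe: u32, k: u32) -> Vec<u32> {
    (0..=universe)
        .filter(|s| (s & !universe) == 0 && s.count_ones() == k)
        .collect()
}

fn main() {
    // Threshold system with n = 5 servers and f = 1: quorums of size 4
    // pass the check, while quorums of size 3 fail it.
    let universe = (1u32 << 5) - 1;
    let fail_prone = subsets_of_size(universe, 1);
    assert!(is_opaque(&subsets_of_size(universe, 4), &fail_prone));
    assert!(!is_opaque(&subsets_of_size(universe, 3), &fail_prone));
    println!("threshold n = 5, f = 1: size-4 quorums are opaque");
}
```

On this threshold family the enumeration agrees with the n ≥ 5f resilience bound at its smallest interesting instance, n = 5 and f = 1.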
The write operation is implemented as described in Section 1, and the read operation becomes:

Read: For a client to read a variable x, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨vu, tu⟩}u∈Q. The client chooses the pair ⟨v, t⟩ that appears most often in A, and if there are multiple such values, the one with the highest timestamp. The value v is the result of the read operation.

Opaque masking quorum systems, combined with the access protocol described previously, provide the same semantics as regular masking quorum systems. The proof is almost identical to that for regular masking quorums.

`Lemma 5.11` A read operation that is concurrent with no write operations returns the value written by the last preceding write operation in some serialization of all preceding write operations.

Below we give several examples of opaque masking quorum systems (or just "opaque quorum systems") and describe their properties.

`Example 5.12` (Threshold) Suppose that B = {B ⊆ U : |B| = f}, where n ≥ 5f and f > 0. Then, the quorum system Q = {Q ⊆ U : |Q| = ⌈2(n+f)/3⌉} is an opaque quorum system for B, whose load is (1/n)⌈2(n+f)/3⌉. ❑

The next theorem proves a resilience bound for opaque quorum systems.

`Theorem 5.13` Suppose that B = {B ⊆ U : |B| = f}. There exists an opaque quorum system for B iff n ≥ 5f.

`Proof.` That n ≥ 5f is sufficient is already demonstrated in Example 5.12 above. Now suppose that Q is an opaque quorum system for B. Fix any Q1 ∈ Q such that |Q1| ≤ n − f (Q1 exists by O3); note that |Q1| > f by O2. Choose B1 ⊆ Q1, |B1| = f, and some Q2 ∈ Q such that Q2 ⊆ U \ B1 (Q2 exists by O3). Then |Q1 ∩ Q2| ≤ n − 2f. By O2, |Q1 ∩ Q2| > f, and therefore there is some B2 ∈ B such that B2 ⊆ Q1 ∩ Q2. Then

n − 3f ≥ |Q2 ∩ Q1| − |B2| = |(Q2 ∩ Q1) \ B2| ≥ |(Q1 ∩ B2) ∪ (Q1 \ Q2)| = |Q1 \ Q2| + |B2| ≥ |B1| + |B2| = 2f,

where the second inequality holds by O1. Therefore, we have n ≥ 5f. ❑

`Example 5.14` (Partition) Suppose that B = {B1, ..., B3k}, k > 1, is a partition of U where Bi ≠ ∅ for all i, 1 ≤ i ≤ 3k. Choose any collection of sets B̂i ⊆ Bi, 1 ≤ i ≤ 3k, such that |B̂i| = c for a fixed constant c > 0. Then, the Threshold construction of Example 5.12 on the 'super-elements' {B̂i} (as in Example 4.9), with universe size 3k and a threshold assumption f = 1, yields an opaque quorum system with load (2k+1)/3k. ❑

Unlike the case for regular masking quorum systems, an open problem is to find a technique for testing whether, given a fail-prone system B, there exists an opaque quorum system for B (other than an exhaustive search of all subsets of 2^U).

In the constructions in Examples 5.12 and 5.14, the resulting quorum systems exhibited loads that at best were constant as a function of n. In the case of masking quorum systems, we were able to exhibit quorum systems whose load decreased as a function of n, namely the grid quorums. A natural question is whether there exists an opaque quorum system for any fail-prone system B that has load that decreases as a function of n. In this section, we answer this question in the negative: we show a lower bound of 1/2 on the load for any opaque quorum system construction, regardless of the fail-prone system.

`Theorem 5.15` The load of any opaque quorum system is at least 1/2.

`Proof.` O1 implies that for any Q1, Q2 ∈ Q, |Q1 ∩ Q2| ≥ |Q1 \ Q2|, and thus |Q1 ∩ Q2| ≥ |Q1|/2. Let w be any strategy for the quorum system Q, and fix any Q1 ∈ Q. Then, the total load induced by w on the elements of Q1 is Σ_{Q∈Q} w(Q)|Q ∩ Q1| ≥ |Q1|/2. Therefore, there must be some server in Q1 that suffers a load of at least 1/2. ❑

We now present a generic construction of an opaque quorum system for B = {∅} and increasingly large universe sizes n, that has a load that tends to 1/2 as n grows. We give this construction primarily to show that the lower bound of 1/2 is tight; due to the requirement that B = {∅}, this construction is not of practical use for coping with Byzantine failures.

`Example 5.16` Suppose that the universe of servers is U = {u1, ..., un} where n = 2^ℓ for some ℓ ≥ 2, and that B = {∅}. Consider the n × n Hadamard matrix H(ℓ), constructed recursively as follows:

H(1) = [ −1 −1 ; −1 1 ]    H(k) = [ H(k−1) H(k−1) ; H(k−1) −H(k−1) ]

H(ℓ) has the property that H(ℓ)H(ℓ)^T = nI, where I is the n × n identity matrix. Using well-known inductive arguments [16, Ch. 14], it can be shown that (i) the first row and column consist entirely of −1's, (ii) the i-th row and the i-th column, for each i ≥ 2, have 1's in n/2 positions (and similarly for −1's), and (iii) any two rows (and any two columns) i, j ≥ 2 have identical elements in n/2 positions, i.e., 1's in n/4 common positions and −1's in n/4 common positions. We treat the rows of H(ℓ) as indicators of subsets of U. That is, let Qi = {uj : H(ℓ)[i, j] = 1} be the set defined by the i-th row, 1 < i ≤ n. By (iii), |Qi ∩ Qj| = n/4 = |Qj \ Qi| for all i ≠ j, so Q = {Qi : 1 < i ≤ n} satisfies O1–O3 for B = {∅}, and its load tends to 1/2 as n grows. ❑

### 6 Faulty clients

Write: For a client c to write the value v, it queries each server in some quorum Q to obtain a set of value/timestamp pairs A = {⟨vu, tu⟩}u∈Q; chooses a timestamp t ∈ Tc greater than the highest timestamp value in A and greater than any timestamp it has chosen in the past; and performs Init(Q, v, t).

Note that writing the pair ⟨v, t⟩ to the quorum Q is performed by executing the operation Init(Q, v, t). Servers execute corresponding events Deliver(v, t). If a correct server executes Deliver(v, t), and if t is greater than the timestamp currently stored with the variable, then the server updates the value of the variable and its timestamp to v and t, respectively. Regardless of whether it updates the variable, it sends an acknowledgment message to c where Tc ∋ t.

The correctness of this protocol depends on the following relationships among Init executions at clients and Deliver events at servers. How to implement Init and Deliver to satisfy these relationships is the topic of Section 6.2.

`Integrity:` If c is correct, then a correct server executes Deliver(v, t) where t ∈ Tc only if c executed Init(Q, v, t) for some Q ∈ Q.
`Agreement:` If a correct server executes Deliver(v, t) and a correct server executes Deliver(v', t), then v = v'.

`Propagation:` If a correct server executes Deliver(v, t), then eventually there exists a quorum Q ∈ Q such that every correct server in Q executes Deliver(v, t).

`Validity:` If a correct client executes Init(Q, v, t) and all servers in Q are correct, then eventually a correct server executes Deliver(v, t).

Note that by Validity, if a correct client executes Init(Q, v, t) but Q contains a faulty server, then there is no guarantee that Deliver(v, t) will occur at any correct server; i.e., the write operation may have no effect. A correct server acknowledges each Deliver(v, t) execution as described above to inform the client that Deliver(v, t) was indeed executed. If the client receives acknowledgments from a set B+ of servers, such that B+ ⊈ B for all B ∈ B, then it is certain that its write will be applied at all correct servers in some quorum Q (by Propagation). If the client receives acknowledgments from no such set B+ of servers, then it must attempt the Init operation again with a different quorum. As before, M2 guarantees the availability of some quorum.

In order to argue correctness for this protocol, we have to adapt the definition of operation precedence to allow for the behavior of a faulty client. The reason is that it is unclear how to define when an operation by a faulty client begins or ends, as the client can behave outside the specification of any protocol. We now say that a write operation that writes v with timestamp t ∈ Tc, where c is faulty, begins when the first correct server executes Deliver(v, t) and ends when all correct servers in some quorum have executed Deliver(v, t). Write operations by correct clients begin as before and end when all the correct servers in some quorum have delivered the update.
We do not define or make use of the duration of a read by a faulty client; reads by faulty clients are not ordered with respect to other operations. Carrying over the remainder of the precedence definition, a proof very similar to that of Lemma 4.2 suffices to prove the following:

`Lemma 6.1` A correct process' read operation that is concurrent with no write operations returns the value written by the preceding write operation with the highest timestamp among all preceding write operations.

We are not aware of any common definition of variable semantics in the case of possibly faulty clients with which to compare Lemma 6.1. However, note that if all the write operations preceding the read are done by correct clients, the highest timestamp value among them will belong to the last write in some serialization of them, and therefore the read will return that value.

### 6.2 The update operation

The remaining protocol to describe is the update protocol for masking quorum systems that satisfies Integrity, Agreement, Propagation, and Validity. We present such an update protocol in Figure 3.

1. If a client executes Init(Q, v, t), then it sends ⟨v, t⟩ to each member of Q.

2. If a server receives ⟨v, t⟩ from a client c, if t ∈ Tc, and if the server has not previously received from c a message ⟨v', t'⟩ where either t' = t and v' ≠ v, or t' > t, then the server sends ⟨echo, v, t⟩ to each member of Q.

3. If a server receives identical ⟨echo, v, t⟩ messages from every server in Q, then it sends ⟨ready, v, t⟩ to each member of Q.

4. If a server receives identical ⟨ready, v, t⟩ messages from a set B+ of servers, such that B+ ⊈ B for all B ∈ B, then it sends ⟨ready, v, t⟩ to every member of Q if it has not done so already.

5. If a server receives identical ⟨ready, v, t⟩ messages from a set Q− of servers, such that for some B ∈ B, Q− = Q \ B, then it executes Deliver(v, t).

Figure 3: An update protocol

`Lemma 6.2` (Integrity) If c is correct, then a correct server executes Deliver(v, t) where t ∈ Tc only if c executed Init(Q, v, t) for some Q.
`Proof.` The first ⟨ready, v, t⟩ message from a correct server is sent only after it receives ⟨echo, v, t⟩ from each member of Q. Moreover, a correct member sends ⟨echo, v, t⟩ where t ∈ Tc only if it receives ⟨v, t⟩ from c over an authenticated channel, i.e., only if c executed Init(Q, v, t). ❑

`Lemma 6.3` (Agreement) If a correct server executes Deliver(v, t) and a correct server executes Deliver(v', t), then v = v'.

`Proof.` As argued in the previous lemma, for a correct server to execute Deliver(v, t), ⟨echo, v, t⟩ must have been sent by all servers in Q. Similarly, ⟨echo, v', t⟩ must have been sent by all servers in Q'. Since every two quorums intersect in (at least) one correct member, and since any correct server sends ⟨echo, v, t⟩ for at most one value v, v must be identical to v'. ❑

`Lemma 6.4` If Q is a masking quorum system over a universe U with respect to a fail-prone system B, then ∀Q ∈ Q ∀B1, B2, B3 ∈ B : Q ⊄ B1 ∪ B2 ∪ B3.

`Proof.` Assume otherwise for a contradiction, i.e., that there is a Q ∈ Q and B1, B2, B3 ∈ B such that Q ⊆ B1 ∪ B2 ∪ B3. By M2, there exists Q' ∈ Q with Q' ∩ B1 = ∅. Then, Q ∩ Q' ⊆ B2 ∪ B3 and thus (Q ∩ Q') \ B2 ⊆ B3, contradicting M1. ❑

`Lemma 6.5` (Propagation) If a correct server executes Deliver(v, t), then eventually there exists a quorum Q ∈ Q such that every correct server in Q executes Deliver(v, t).

`Proof.` According to the protocol, the correct server that executed Deliver(v, t) received a ⟨ready, v, t⟩ message from each server in Q− = Q \ B for some Q ∈ Q and B ∈ B. Since, for some B' ∈ B, (at least) all the members of Q− \ B' are correct, every correct member of Q receives ⟨ready, v, t⟩ from each of the members of B+ = Q− \ B'. Since ∀B'' ∈ B : Q− \ B' ⊈ B'' (by Lemma 6.4), the ready messages from B+ cause each correct member of Q to send such a ready message. Consequently, Deliver(v, t) is executed by all of the correct members of Q. ❑

`Lemma 6.6` (Validity) If a correct client c executes Init(Q, v, t) and all servers in Q are correct, then eventually a correct server executes Deliver(v, t).

`Proof.`
Since both the client and all of the members of Q are correct, ⟨v, t⟩ will be received and echoed by every member of Q. Consequently, all the servers in Q will send ⟨ready, v, t⟩ messages to the members of Q, and will eventually execute Deliver(v, t). ❑

### 7 Conclusions

The literature contains an abundance of protocols that use quorums for accessing replicated data. This approach is appealing for constructing replicated services, as it allows for increasing the availability and efficiency of the service while maintaining its consistency. Our work extends this successful approach to environments where both the servers and the clients of a service may deviate from their prescribed behavior in arbitrary ways. We introduced a new class of quorum systems, namely masking quorum systems, and devised protocols that use these quorums to enhance the availability of systems prone to Byzantine failures. We also explored two variations of our quorum systems, namely dissemination and opaque masking quorums, and for all of these classes of quorums we provided various constructions and analyzed the load they impose on the system.

Our work leaves a number of intriguing open challenges and directions for future work. One is to characterize the average performance of our quorum constructions and their load in less-than-ideal scenarios, e.g., when failures occur. Also, in this work we described only quorum systems that are uniform, in the sense that any quorum is possible for both read and write operations. In practice it may be beneficial to employ quorum systems with distinguished read quorums and write quorums, with consistency requirements imposed only between pairs consisting of at least one write quorum. Although this does not seem to improve our lower bounds on the overall load that can be achieved, it may allow greater flexibility in trading between the availability of reads and writes.
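The counting thresholds of the update protocol of Figure 3 can be made concrete with a small, deliberately simplified tally. The sketch below is our own illustration under a threshold fail-prone assumption; the function `deliveries` and its crash-only failure model (servers either follow the protocol or stay silent) are assumptions for this example, not the paper's Byzantine model.

```rust
// Tally-based sketch of one run of the echo/ready update protocol of Figure 3,
// for a quorum Q under a threshold assumption |B| = f. Faulty servers are
// modeled as silent, which is a simplification of arbitrary (Byzantine) faults.

fn deliveries(quorum_size: usize, f: usize, silent: usize) -> usize {
    let correct = quorum_size - silent;

    // Step 2: only correct servers echo the client's <v, t> to all of Q.
    let echoes_received = correct;

    // Step 3: a correct server sends <ready, v, t> only if it saw identical
    // echoes from *every* member of Q.
    let readies = if echoes_received == quorum_size { correct } else { 0 };

    // Step 4: readies from a set B+ contained in no fail-prone set (here:
    // more than f identical readies) would make the remaining servers ready
    // too; with all-or-nothing step 3 above this changes nothing.
    let readies = if readies > f { correct } else { readies };

    // Step 5: a server delivers on identical readies from some Q \ B,
    // i.e. at least quorum_size - f of them.
    if readies >= quorum_size - f { correct } else { 0 }
}

fn main() {
    // All 7 quorum members correct, f = 2: every member delivers (Validity).
    assert_eq!(deliveries(7, 2, 0), 7);
    // One silent member: step 3 never fires, the write may have no effect,
    // and the client must retry the Init with a different quorum.
    assert_eq!(deliveries(7, 2, 1), 0);
    println!("ok");
}
```

The second assertion mirrors the remark after Validity: when the chosen quorum contains a faulty server, the protocol may make no progress, which is why the client waits for acknowledgments from a set B+ not contained in any fail-prone set before considering the write applied.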
+Acknowledgments + +We are grateful to Andrew Odlyzko for suggesting the use of Hadamard matrices to construct opaque masking quo- rum systems with an asymptotic load of ~. We also thank Yehuda Afek and Mk.hael Merritt for he1pful discussions, and Rebecca Wright for many helpful comments on earlier versions of this paper. An insightful comment by Rida Bazzi led to a substantial improvement over a previous version of this paper. +577 + + diff --git a/Cargo.lock b/Cargo.lock index 5050e6c9..b61efc16 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -203,7 +203,7 @@ dependencies = [ "futures-lite 2.1.0", "parking", "polling 3.3.1", - "rustix 0.38.26", + "rustix 0.38.27", "slab", "tracing", "windows-sys 0.52.0", @@ -242,7 +242,7 @@ dependencies = [ "cfg-if", "event-listener 3.1.0", "futures-lite 1.13.0", - "rustix 0.38.26", + "rustix 0.38.27", "windows-sys 0.48.0", ] @@ -258,7 +258,7 @@ dependencies = [ "cfg-if", "futures-core", "futures-io", - "rustix 0.38.26", + "rustix 0.38.27", "signal-hook-registry", "slab", "windows-sys 0.48.0", @@ -1933,9 +1933,9 @@ dependencies = [ [[package]] name = "once_cell" -version = "1.18.0" +version = "1.19.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dd8b5dd2ae5ed71462c540258bedcb51965123ad7e7ccf4b9a8cafaa4a63576d" +checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92" [[package]] name = "oorandom" @@ -2128,7 +2128,7 @@ dependencies = [ "cfg-if", "concurrent-queue", "pin-project-lite 0.2.13", - "rustix 0.38.26", + "rustix 0.38.27", "tracing", "windows-sys 0.52.0", ] @@ -2532,9 +2532,9 @@ dependencies = [ [[package]] name = "rustix" -version = "0.38.26" +version = "0.38.27" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9470c4bf8246c8daf25f9598dca807fb6510347b1e1cfa55749113850c79d88a" +checksum = "bfeae074e687625746172d639330f1de242a178bf3189b51e35a7a21573513ac" dependencies = [ "bitflags 2.4.1", "errno", @@ -2545,9 +2545,9 @@ dependencies = [ 
[[package]] name = "rustls" -version = "0.21.9" +version = "0.21.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "629648aced5775d558af50b2b4c7b02983a04b312126d45eeead26e7caa498b9" +checksum = "f9d5a6813c0759e4609cd494e8e725babae6a2ca7b62a5536a13daaec6fcb7ba" dependencies = [ "log", "ring", @@ -2908,7 +2908,7 @@ dependencies = [ "cfg-if", "fastrand 2.0.1", "redox_syscall", - "rustix 0.38.26", + "rustix 0.38.27", "windows-sys 0.48.0", ] @@ -2923,9 +2923,19 @@ dependencies = [ [[package]] name = "test-log" -version = "0.2.13" +version = "0.2.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f66edd6b6cd810743c0c71e1d085e92b01ce6a72782032e3f794c8284fe4bcdd" +checksum = "6159ab4116165c99fc88cce31f99fa2c9dbe08d3691cb38da02fc3b45f357d2b" +dependencies = [ + "test-log-macros", + "tracing-subscriber", +] + +[[package]] +name = "test-log-macros" +version = "0.2.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7ba277e77219e9eea169e8508942db1bf5d8a41ff2db9b20aab5a5aadc9fa25d" dependencies = [ "proc-macro2", "quote", @@ -3159,9 +3169,9 @@ checksum = "859eb650cfee7434994602c3a68b25d77ad9e68c8a6cd491616ef86661382eb3" [[package]] name = "try-lock" -version = "0.2.4" +version = "0.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3528ecfd12c466c6f163363caf2d02a71161dd5e1cc6ae7b34207ea2d42d81ed" +checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b" [[package]] name = "tungstenite" @@ -3200,9 +3210,9 @@ dependencies = [ [[package]] name = "unicode-bidi" -version = "0.3.13" +version = "0.3.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "92888ba5573ff080736b3648696b70cafad7d250551175acbaa4e0385b3e1460" +checksum = "6f2528f27a9eb2b21e69c95319b30bd0efd85d09c379741b0f78ea1d86be2416" [[package]] name = "unicode-ident" diff --git a/Cargo.toml b/Cargo.toml index 8341382f..f9d69bbd 100755 --- a/Cargo.toml 
+++ b/Cargo.toml @@ -53,7 +53,7 @@ curve25519-dalek = "4" crypto_secretstream = "0.2" rust-crypto = "0.2.36" git2 = "0.9.1" -argparse = "0.2.2" +argparse = "0.2.2" time = "0.1.42" chrono = "0" nostr = "0.24.0" diff --git a/cargo.mk b/cargo.mk index ddb37ad7..3a88e465 100755 --- a/cargo.mk +++ b/cargo.mk @@ -49,5 +49,18 @@ cargo-doc:### cargo-doc cargo-b-wasm-tokio: @. $(HOME)/.cargo/env && cargo clean && cargo build --target=wasm32-unknown-emscripten --no-default-features #--features wasm-bindgen,tokio +node-examples-nodejs-run-js-node:### node-examples-nodejs-run-js-node +## node-examples-nodejs-run-js-node + node examples-nodejs/run.js node +node-examples-nodejs-run-js-node-6102:### node-examples-nodejs-run-js-node-6102 +## node-examples-nodejs-run-js-node-6102 + node examples-nodejs/run.js node 6102 +node-examples-nodejs-run-js-rust:### node-examples-nodejs-run-js-rust +## node-examples-nodejs-run-js-rust + node examples-nodejs/run.js rust +node-examples-nodejs-run-js-rust-2106:### node-examples-nodejs-run-js-rust-2106 +## node-examples-nodejs-run-js-rust-2106 + node examples-nodejs/run.js rust 2106 + # vim: set noexpandtab: # vim: set setfiletype make diff --git a/examples-nodejs/replicate.js b/examples-nodejs/replicate.js index 6425b199..39fe2992 100644 --- a/examples-nodejs/replicate.js +++ b/examples-nodejs/replicate.js @@ -12,7 +12,7 @@ hypercore.info().then((_info) => { console.log('KEY=' + hypercore.key.toString('hex')) console.log() if (hypercore.writable && !key) { - hypercore.append(['hi\n', 'ola\n', 'hello\n', 'mundo\n']) + hypercore.append(['hi\n', 'ola\n', 'hello\n', 'mundo\n', hypercore.key.toString('hex')]) } }) @@ -58,7 +58,7 @@ function onconnection (opts) { console.log(""); console.log("### Results (Press Ctrl-C to exit)"); console.log(""); - console.log("Replication succeeded if you see '0: hi', '1: ola', '2: hello' and '3: mundo' (not necessarily in that order)") + console.log("Replication succeeded if you see '0: hi', '1: ola', '2: hello', 
'3: mundo', '4: key', (not necessarily in that order)") console.log(""); for (let i = 0; i < hypercore.length; i++) { hypercore.get(i).then(value => { diff --git a/examples-nodejs/run.js b/examples-nodejs/run.js index c96541fd..5db978e5 100644 --- a/examples-nodejs/run.js +++ b/examples-nodejs/run.js @@ -3,11 +3,11 @@ const p = require('path') const chalk = require('chalk') const split = require('split2') -const PORT = 8000 - const EXAMPLE_NODE = p.join(__dirname, 'replicate.js') const EXAMPLE_RUST = 'replication' const MODE = process.argv[2] +const PORT = process.argv[3] || 8000 + if (!MODE) { usage() } diff --git a/examples/replication.rs b/examples/replication.rs index 542925c1..4f15dba5 100644 --- a/examples/replication.rs +++ b/examples/replication.rs @@ -38,8 +38,11 @@ fn main() { }); task::block_on(async move { + let mut hypercore_store: HypercoreStore = HypercoreStore::new(); + let storage = Storage::new_memory().await.unwrap(); + // Create a hypercore. let hypercore = if let Some(key) = key { let public_key = VerifyingKey::from_bytes(&key).unwrap(); @@ -53,7 +56,7 @@ fn main() { .unwrap() } else { let mut hypercore = HypercoreBuilder::new(storage).build().await.unwrap(); - let batch: &[&[u8]] = &[b"hi\n", b"ola\n", b"hello\n", b"mundo\n"]; + let batch: &[&[u8]] = &[b"hi\n", b"ola\n", b"hello\n", b"mundo\n", b"key\n"]; hypercore.append_batch(batch).await.unwrap(); hypercore }; @@ -388,7 +391,7 @@ where println!(); println!("### Results"); println!(); - println!("Replication succeeded if this prints '0: hi', '1: ola', '2: hello' and '3: mundo':"); + println!("Replication succeeded if this prints '0: hi', '1: ola', '2: hello', '3: mundo', '4: key':"); println!(); for i in 0..new_info.contiguous_length { println!( diff --git a/vendor/hypercore/.cargo-checksum.json b/vendor/hypercore/.cargo-checksum.json new file mode 100644 index 00000000..db9604ec --- /dev/null +++ b/vendor/hypercore/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{"CERTIFICATE":"53a0a460f8eccb279580aa16013c5f98936eba73554d267632f5ea83d8e890b1","CHANGELOG.md":"8a4836a83337941142be6b379a079fe7575dc43c72a4708880c77906d4168f5d","Cargo.lock":"f20c29c9f5177c0a4fdf1130739aba95853588fdcee6689884b4551cb834f7df","Cargo.toml":"b80a9f1e01fb34ae1df211fb22dbe252f789de9b5686dd8e83bb5e225a7179db","LICENSE-APACHE":"40b135370517318ee023f4553b49453ab716f4277ccc7801beb3a44ec481c9fb","LICENSE-MIT":"a06326997c80661a79a99f66af7417f3420640323952c92afae69d5a4b7537ee","README.md":"db19ee440a805365a53ca973e374eb46eb794142290f8da2c90f3a284a873918","benches/disk.rs":"5a6c5d2b5a30a519464dec175c37b4ff1245065323cef5c35b1dca47af607c94","benches/memory.rs":"3f84f98a50017389b539d2f4c2208f4430e35bf5dc6af4ed7d6e49e29980c8ec","examples/disk.rs":"62974fe304dfbb6284c92df1a5cb6375058f1d359f5f9d78a6fd6a300f663b0c","examples/memory.rs":"f4ddc81bed815ed63625a5ca5ce67beaa16fc12f841c324eef316e032d94ef27","examples/replication.rs":"161139858c741ac9277564af52b40b2df44c870061fb3acfa939a9f3618d0e61","src/bitfield/dynamic.rs":"6003a197ce52d0aa605ba17bbae66e695771e9900dd68e984dd17c2208294e4d","src/bitfield/fixed.rs":"2dbaac2d56b56f78af2793a6b7e883951a1bae02944f63ee92fa2faae6e4539e","src/bitfield/mod.rs":"76f1555e6389a73e73f7fdee470ed3d51ebfcebbd1d94d5c5731f8c8683dab4a","src/builder.rs":"0411b6ffc73a1cda42075ce987e6ae3e85666f03c5259c968d42da5427c5a75b","src/common/cache.rs":"ed25e719612092fb943099eb07bc8a787cc7a7df650c1371f5a6dc3d0ef0181c","src/common/error.rs":"ac4f5658d8b874f5affde1fe8c51576d6ad8eaf8208b42999319691208c5c133","src/common/mod.rs":"426f30f05ab8ebb178e064370f457b45ccebd824f5505908b6d729024ca6194e","src/common/node.rs":"08ab19770804f4e5ce61d3595b06062af2cc1068f4176af16ccb52cc548d3ee0","src/common/peer.rs":"aa096218932eda26927f3c27a22a583c0b52a4cd8c54a2abc604469209917ebe","src/common/store.rs":"e0490b59ec68dea1f0519727f6f3a3d993bf3f9b8d5fefdf6fad3e66d549e17c","src/core.rs":"43026192ee19ba3b61dd39bbf263911b889659aa8246fd89d60246c549941e31","src/crypto/has
h.rs":"0011d7326968c6072d2d416eb31a83afa271d1256517914914bf2371ceef369c","src/crypto/key_pair.rs":"262939ca4da120491308e7de734fc2e41f3fc4d9b916a90a0b0525db98f0b5a0","src/crypto/manifest.rs":"7f0dca0127bcf0290ebb8c385acf6f3add7a6df9987acf883dfcf1a3066262f4","src/crypto/mod.rs":"d227ef04204ac64d5136dbfc6dc5fbd6833ae40e13412dde8c712138898a7c15","src/data/mod.rs":"af89c77f5b45a48b860935f1ea6761afc25847df4f5a7327d9642d515f761363","src/encoding.rs":"725501e6f36cf4548220d1ab3d564670dacf947a53ac974246f8c4acca32a4c4","src/lib.rs":"3799aad86cd033ce830660fd7329c9be00db076d6bea9e6ac5126847f12b3298","src/oplog/entry.rs":"df13ac9985cecbd4e057c0375beadf5296186352024b5b01aec9e3c1a41f2bd0","src/oplog/header.rs":"d2a2779b7b8ded5627b834ce0de87abc759de80c579b621ee217cb6ebb3cf1ec","src/oplog/mod.rs":"8404515befe41225055daa5b6499d1495316368e799cc62251b88f52cf09e998","src/prelude.rs":"254cc894b75d1181a8b677a904da441cc64fe44c4a0aac050f46c9c69c2bb855","src/storage/mod.rs":"0a84af3d6b76dd4afec03374d1ef556c069c48dbeab0c23f13cbe40ed61199f9","src/tree/merkle_tree.rs":"439807c4f46c484223a800a83716570ec24fd0bdde0d3bfd8f629deb85b28fa1","src/tree/merkle_tree_changeset.rs":"9da5e8cbaea13cdd537b979e7e470b387a8811510966fbe6ee826ef3e64f11e1","src/tree/mod.rs":"787b1877ce05197191aa7381752707b0f7a9e14f276bb8ddc5466b14db1e5ea9","tests/common/mod.rs":"5de5296507c1633e21ee63280da79c21cc6572fa8191cdd5da8d993177722a3b","tests/core.rs":"851d32f8149c140fb260cfe58d838e22f24d971c25f9008448981dfa1f633b82","tests/js/interop.js":"6cfde323aab89db0548898e0a0e8c93921cc9e684cc6f7343fb9df9c96a8ab8a","tests/js/mod.rs":"b741ad5f4bedbd482dd423d7b8d5e1248f11e1f9c8dc312666b92dc07c2e7c67","tests/js/package.json":"3fb483760615ba5e27b80bb340c593c90f6a7e2b95a4b895174a9e786c75c715","tests/js_interop.rs":"d4f44309aeda77286e4a5bd13fb74647d807c2abb9094a09dcef18907bf43158","tests/model.rs":"d1ef5bea900f87bd7cb2e72d4df70a8d92bb6291f6f5e85b36afea397254307c"},"package":"f58caa4fc81bfac2b018eda14b81cadbfc5cdd88fcee81c5b37532144027cde8"} 
\ No newline at end of file diff --git a/vendor/hypercore/CERTIFICATE b/vendor/hypercore/CERTIFICATE new file mode 100644 index 00000000..8201f992 --- /dev/null +++ b/vendor/hypercore/CERTIFICATE @@ -0,0 +1,37 @@ +Developer Certificate of Origin +Version 1.1 + +Copyright (C) 2004, 2006 The Linux Foundation and its contributors. +1 Letterman Drive +Suite D4700 +San Francisco, CA, 94129 + +Everyone is permitted to copy and distribute verbatim copies of this +license document, but changing it is not allowed. + + +Developer's Certificate of Origin 1.1 + +By making a contribution to this project, I certify that: + +(a) The contribution was created in whole or in part by me and I + have the right to submit it under the open source license + indicated in the file; or + +(b) The contribution is based upon previous work that, to the best + of my knowledge, is covered under an appropriate open source + license and I have the right under that license to submit that + work with modifications, whether created in whole or in part + by me, under the same open source license (unless I am + permitted to submit under a different license), as indicated + in the file; or + +(c) The contribution was provided directly to me by some other + person who certified (a), (b) or (c) and I have not modified + it. + +(d) I understand and agree that this project and the contribution + are public and that a record of the contribution (including all + personal information I submit with it, including my sign-off) is + maintained indefinitely and may be redistributed consistent with + this project or the open source license(s) involved. 
diff --git a/vendor/hypercore/CHANGELOG.md b/vendor/hypercore/CHANGELOG.md new file mode 100644 index 00000000..5b57c6d3 --- /dev/null +++ b/vendor/hypercore/CHANGELOG.md @@ -0,0 +1,409 @@ +## 2023-10-28, Version v0.12.1 +### Commits +- [[`60d50a5e76`](https://github.com/datrs/hypercore/commit/60d50a5e7638c60047c722b6cfb7c50e29ecd502)] Fix Oplog decoding failing on bitfied update (Timo Tiuraniemi) + +### Stats +```diff + src/oplog/entry.rs | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) +``` + + +## 2023-10-12, Version v0.12.0 +### Commits +- [[`fa7d487758`](https://github.com/datrs/hypercore/commit/fa7d4877582023e310a7129b11ebd55eb877a75f)] Merge pull request #138 from datrs/v10 (Timo Tiuraniemi) + +### Stats +```diff + .github/workflows/ci.yml | 142 ++++ + .gitignore | 2 + + CHANGELOG.md | 31 + + Cargo.toml | 83 +- + README.md | 81 +- + benches/bench.rs | 58 -- + benches/disk.rs | 140 ++++ + benches/memory.rs | 128 +++ + examples/async.rs | 29 - + examples/disk.rs | 88 ++ + examples/iter.rs | 80 -- + examples/main.rs | 29 - + examples/memory.rs | 59 ++ + examples/replication.rs | 116 +++ + src/audit.rs | 20 - + src/bitfield/dynamic.rs | 403 +++++++++ + src/bitfield/fixed.rs | 228 ++++++ + src/bitfield/iterator.rs | 158 ---- + src/bitfield/masks.rs | 108 --- + src/bitfield/mod.rs | 379 +-------- + src/builder.rs | 100 +++ + src/common/cache.rs | 58 ++ + src/common/error.rs | 78 ++ + src/common/mod.rs | 23 + + src/{storage => common}/node.rs | 77 +- + src/common/peer.rs | 117 +++ + src/common/store.rs | 155 ++++ + src/core.rs | 1136 ++++++++++++++++++++++++++ + src/crypto/hash.rs | 227 +++++- + src/crypto/key_pair.rs | 56 +- + src/crypto/manifest.rs | 43 + + src/crypto/merkle.rs | 74 -- + src/crypto/mod.rs | 10 +- + src/crypto/root.rs | 52 -- + src/data/mod.rs | 46 ++ + src/encoding.rs | 370 +++++++++ + src/event.rs | 3 - + src/feed.rs | 676 ---------------- + src/feed_builder.rs | 89 -- + src/lib.rs | 112 ++- + src/oplog/entry.rs | 164 ++++ + 
src/oplog/header.rs | 325 ++++++++ + src/oplog/mod.rs | 495 ++++++++++++ + src/prelude.rs | 16 +- + src/proof.rs | 30 - + src/replicate/message.rs | 6 - + src/replicate/mod.rs | 5 - + src/replicate/peer.rs | 40 - + src/storage/mod.rs | 578 +++++-------- + src/storage/persist.rs | 19 - + src/tree/merkle_tree.rs | 1616 +++++++++++++++++++++++++++++++++++++ + src/tree/merkle_tree_changeset.rs | 131 +++ + src/tree/mod.rs | 5 + + tests/bitfield.rs | 195 ----- + tests/common/mod.rs | 108 ++- + tests/compat.rs | 178 ---- + tests/core.rs | 79 ++ + tests/feed.rs | 340 -------- + tests/js/interop.js | 128 +++ + tests/js/mod.rs | 50 ++ + tests/js/package.json | 10 + + tests/js_interop.rs | 192 +++++ + tests/model.rs | 175 ++-- + tests/regression.rs | 18 - + tests/storage.rs | 51 -- + 65 files changed, 7558 insertions(+), 3260 deletions(-) +``` + + +## 2020-07-19, Version v0.11.1-beta.10 +### Commits +- [[`084f00dd3c`](https://github.com/datrs/hypercore/commit/084f00dd3cd9d201315e43eef44352317f9f9b8b)] (cargo-release) version 0.11.1-beta.10 (Bruno Tavares) +- [[`99eff3db3c`](https://github.com/datrs/hypercore/commit/99eff3db3c0f70aeda8e31594c9e2c401743e4b9)] Fix travis errors - clippy warnings and fmt (Bruno Tavares) +- [[`d6f2c5522f`](https://github.com/datrs/hypercore/commit/d6f2c5522f62dbc1f4df303bbaa199f621e3ab70)] Merge pull request #121 from khodzha/append_fix (Bruno Tavares) +- [[`57bd16444e`](https://github.com/datrs/hypercore/commit/57bd16444e3c4e5576e51ac7787851a145d371e9)] Avoid calling unwrap or expect inside fn that returns Result (Bruno Tavares) +- [[`de9ebae3ce`](https://github.com/datrs/hypercore/commit/de9ebae3ce4b0a1f0e76ee17b710c70475f5c33f)] Pin ed25519-dalek to a version with compatible signature methods (Bruno Tavares) +- [[`f7676d530a`](https://github.com/datrs/hypercore/commit/f7676d530a3f6d4ef18f3c92989cccac1c40c131)] Fix clippy errors (Bruno Tavares) +- 
[[`cf251468e9`](https://github.com/datrs/hypercore/commit/cf251468e9194500cb3b900cc0bb3c9b4a8bfa84)] fixed saving feed to disk (Shamir Khodzha) +- [[`2c260b1b51`](https://github.com/datrs/hypercore/commit/2c260b1b51a5e2ea48bf806fefbfc3705e7dcef1)] Update changelog (Bruno Tavares) + +### Stats +```diff + .gitignore | 1 +- + CHANGELOG.md | 24 +++++++++++++- + Cargo.toml | 4 +- + benches/bench.rs | 7 ++-- + examples/main.rs | 23 +++++++++++-- + src/bitfield/mod.rs | 97 +++++++++++++++++++++++++++++++++++++++++++++-------- + src/crypto/merkle.rs | 11 ++++++- + src/feed.rs | 16 +++++++-- + src/feed_builder.rs | 42 +++++++++++++++++++---- + src/storage/mod.rs | 93 ++++++++++++++++++++++++++++++++++++--------------- + tests/bitfield.rs | 18 ++++------ + tests/common/mod.rs | 2 +- + tests/compat.rs | 12 ++++--- + tests/feed.rs | 24 +++++++++---- + 14 files changed, 295 insertions(+), 79 deletions(-) +``` + + +## 2020-07-09, Version v0.11.1-beta.9 +### Commits +- [[`8589bd17a6`](https://github.com/datrs/hypercore/commit/8589bd17a6ed323a3c48844a6ef13d40937899df)] (cargo-release) version 0.11.1-beta.9 (Bruno Tavares) +- [[`2765a010ea`](https://github.com/datrs/hypercore/commit/2765a010ea176190be4aa36c265de1d2f8cb78c0)] Merge pull request #120 from khodzha/path_check (Bruno Tavares) +- [[`8ee485bf62`](https://github.com/datrs/hypercore/commit/8ee485bf62da4ae6d6a57a8a691db448fa87a3b1)] added path is a dir check in Feed::open (Shamir Khodzha) +- [[`62a411ee66`](https://github.com/datrs/hypercore/commit/62a411ee660701927884c5276032fc94dc7bc952)] Merge branch 'dependabot/cargo/bitfield-rle-0.2.0' (Bruno Tavares) +- [[`bac9ba4905`](https://github.com/datrs/hypercore/commit/bac9ba4905b339c3f79408b2f7ac6fe4bfeb8ad8)] Fix cargofmt (Bruno Tavares) +- [[`2a6563b46f`](https://github.com/datrs/hypercore/commit/2a6563b46f7e67efcd3551403ed300e10d822891)] Update bitfield-rle requirement from 0.1.1 to 0.2.0 (dependabot-preview[bot]) +- 
[[`37d2a9cf24`](https://github.com/datrs/hypercore/commit/37d2a9cf24502988ec3ad2108b9ae37c5c1f82f2)] Merge branch 'fix-mask-note' (Bruno Tavares) +- [[`e53afb8d92`](https://github.com/datrs/hypercore/commit/e53afb8d92da4a8f55f54c3ed6f987a3b4bde1bf)] Merge branch 'master' into fix-mask-note (Bruno Tavares) +- [[`999ff75213`](https://github.com/datrs/hypercore/commit/999ff75213cdf4246c096bfb3c7bb6fefc666860)] Merge branch 'FreddieRidell-document-src-feed-rs' (Bruno Tavares) +- [[`6be4441404`](https://github.com/datrs/hypercore/commit/6be44414046a5cb801f2985d381e932c9c06075b)] Merge branch 'document-src-feed-rs' of git://github.com/FreddieRidell/hypercore into FreddieRidell-document-src-feed-rs (Bruno Tavares) + +### Stats +```diff + Cargo.toml | 4 +-- + src/bitfield/masks.rs | 2 +- + src/crypto/mod.rs | 4 ++- + src/feed.rs | 73 +++++++++++++++++++++++++++++++++++++++++----------- + tests/feed.rs | 30 +++++++++++++++++++++- + 5 files changed, 94 insertions(+), 19 deletions(-) +``` + + +## 2020-03-03, Version 0.11.1-beta.3 +### Commits +- [[`b555606bd6`](https://github.com/datrs/hypercore/commit/b555606bd626ae39f338bd6aef4f8976ff0c055e)] (cargo-release) version 0.11.1-beta.3 (Bruno Tavares) +- [[`aaf265b8b8`](https://github.com/datrs/hypercore/commit/aaf265b8b84ee5ba6b975a5503db262e154c14eb)] Fix requirements on ram crates to compile (Bruno Tavares) +- [[`10448df561`](https://github.com/datrs/hypercore/commit/10448df56163c1f2917d4508f57713d635fa2d24)] Update changelog (Bruno Tavares) + +### Stats +```diff + CHANGELOG.md | 24 ++++++++++++++++++++++++ + Cargo.toml | 6 +++--- + 2 files changed, 27 insertions(+), 3 deletions(-) +``` + + +## 2020-03-03, Version 0.11.1-beta.2 +### Commits +- [[`3dfd5c8c71`](https://github.com/datrs/hypercore/commit/3dfd5c8c716a439131cf7b9a2b360ef737969335)] (cargo-release) version 0.11.1-beta.2 (Bruno Tavares) +- [[`4136866e01`](https://github.com/datrs/hypercore/commit/4136866e01259825944cff099e59ffa4c8df081c)] Merge pull request #96 from 
bltavares/bitfield-compress (Bruno Tavares) +- [[`d8beadbbfb`](https://github.com/datrs/hypercore/commit/d8beadbbfb0ff7d2d79e52abc14ffb570570b101)] GH Feedback: add comments on the optional fields (Bruno Tavares) +- [[`9c6812d901`](https://github.com/datrs/hypercore/commit/9c6812d901454a383bee9802e0f5828c3224b515)] Use literals for floats (Bruno Tavares) +- [[`356c90e915`](https://github.com/datrs/hypercore/commit/356c90e915a9a5dcc4edb5bf0fa61eda200f6b9b)] Make test with bigger ranges than page size (Bruno Tavares) +- [[`390e13f9b5`](https://github.com/datrs/hypercore/commit/390e13f9b527845f281b24071bbf579f9a6232eb)] WIP: JS has float numbers on math (Bruno Tavares) +- [[`bd333ba68d`](https://github.com/datrs/hypercore/commit/bd333ba68dc50f6e8bc581d39169ae64f6cba9de)] Compress bitfield and expose it to network code (Bruno Tavares) +- [[`0bdbf6207a`](https://github.com/datrs/hypercore/commit/0bdbf6207af26ca3e3516956db7fa3140679e56e)] Bump dalek and rand (Bruno Tavares) +- [[`ac0f3b6a74`](https://github.com/datrs/hypercore/commit/ac0f3b6a743cae1a8c1b51cabfd5a542ef34361b)] Update changelog (Bruno Tavares) + +### Stats +```diff + CHANGELOG.md | 40 ++++++++++++++++++++++++++++++++++++++++ + Cargo.toml | 3 ++- + src/bitfield/mod.rs | 32 ++++++++++++++++++++++++++++++++ + src/feed.rs | 5 +++++ + tests/bitfield.rs | 22 ++++++++++++++++++++++ + tests/model.rs | 7 +------ + 6 files changed, 102 insertions(+), 7 deletions(-) +``` + + +## 2020-03-03, Version 0.11.1-beta.1 +### Commits +- [[`e5f071766c`](https://github.com/datrs/hypercore/commit/e5f071766c8b32c875df4872abe89ebb43700f31)] (cargo-release) version 0.11.1-beta.1 (Bruno Tavares) +- [[`f7af79a3c2`](https://github.com/datrs/hypercore/commit/f7af79a3c271b426d0d6638872b0420a341d025e)] Merge pull request #100 from bltavares/bumps (Bruno Tavares) +- [[`51c35d8f42`](https://github.com/datrs/hypercore/commit/51c35d8f42c42e111f2c207f1901288aaee7e500)] Point deps to crates versions (Bruno Tavares) +- 
[[`f3b421c6ca`](https://github.com/datrs/hypercore/commit/f3b421c6ca76a0b5c5acb267988d97ba97e8a77a)] Fix clippy: rename func to adhere to conventions (Bruno Tavares) +- [[`ba09c27336`](https://github.com/datrs/hypercore/commit/ba09c2733684f0320a7f99ebfa3ec8aae31334fd)] Fix travis: include checks on benchmarks (Bruno Tavares) +- [[`173bc3fda2`](https://github.com/datrs/hypercore/commit/173bc3fda2f079994a38577030142b97c3143b4f)] Move from usize to u64 (Bruno Tavares) +- [[`0678d06687`](https://github.com/datrs/hypercore/commit/0678d066875b7cef8cde3628f7ef91658a40f8c1)] Fix changes on ed25519_dalek and rand (Bruno Tavares) +- [[`7fd467d928`](https://github.com/datrs/hypercore/commit/7fd467d92800e00cff7600fe6e68fbb474c899be)] Fix Travis config (Bruno Tavares) +- [[`c4dc33a69a`](https://github.com/datrs/hypercore/commit/c4dc33a69aeead974d7dbd35d8414016ea3e421b)] Bump versions to latest versions (Bruno Tavares) +- [[`ac3790dd4d`](https://github.com/datrs/hypercore/commit/ac3790dd4da0c72341944f29a75a8bf1fefcae00)] Bump versions to latest versions (Bruno Tavares) +- [[`a3aa858b61`](https://github.com/datrs/hypercore/commit/a3aa858b61f36b30d02f06976eebbb37d823aa81)] Update sparse-bitfield requirement from 0.10.0 to 0.11.0 (dependabot-preview[bot]) +- [[`97cf996831`](https://github.com/datrs/hypercore/commit/97cf996831d00626a6ea75cc5267d5974bbca573)] Update changelog (Bruno Tavares) + +### Stats +```diff + .travis.yml | 8 ++-- + CHANGELOG.md | 28 +++++++++++++- + Cargo.toml | 34 ++++++++-------- + examples/iter.rs | 6 +-- + src/audit.rs | 8 ++-- + src/bitfield/iterator.rs | 37 +++++++++--------- + src/bitfield/mod.rs | 100 ++++++++++++++++++++++++------------------------ + src/crypto/hash.rs | 12 +++--- + src/crypto/key_pair.rs | 13 +++--- + src/crypto/root.rs | 62 +++++++++++++++--------------- + src/feed.rs | 48 +++++++++++------------ + src/proof.rs | 4 +- + src/replicate/message.rs | 4 +- + src/replicate/peer.rs | 4 +- + src/storage/mod.rs | 42 ++++++++++---------- + 
src/storage/node.rs | 16 ++++---- + src/storage/persist.rs | 4 +- + tests/bitfield.rs | 8 ++-- + tests/model.rs | 12 +++--- + 19 files changed, 243 insertions(+), 207 deletions(-) +``` + + +## 2020-02-19, Version 0.11.0 +### Commits +- [[`f2baf805d5`](https://github.com/datrs/hypercore/commit/f2baf805d5477c768f32ca2cf7faae4d9d284686)] (cargo-release) version 0.11.0 (Bruno Tavares) +- [[`31dfdd15f2`](https://github.com/datrs/hypercore/commit/31dfdd15f27356780d75fa126bd8a8d464fefc39)] Merge pull request #95 from bltavares/send (Bruno Tavares) +- [[`46be5197a2`](https://github.com/datrs/hypercore/commit/46be5197a2398e04d413ebfa65fcb6f830dedf0f)] Use published version (Bruno Tavares) +- [[`d4905b11cf`](https://github.com/datrs/hypercore/commit/d4905b11cf83871db98c118c373d52626e6b1c78)] Point to merkle-tree-stream that is Send while new version is to be released (Bruno Tavares) +- [[`40caf92ec2`](https://github.com/datrs/hypercore/commit/40caf92ec2c357a08ddeec03f9d4ba34a723eeaf)] Replace all Rc with Arc in code. 
Needs to update dependencies (Bruno Tavares) +- [[`2dc8008a55`](https://github.com/datrs/hypercore/commit/2dc8008a5542713a2569cfb115a006dee34bbca6)] example to ensure structs are send (Bruno Tavares) +- [[`f77fe7b025`](https://github.com/datrs/hypercore/commit/f77fe7b0257bd5f0e7007c012bc68bc1d75eda05)] fix readme link (#88) (nasa) +- [[`82e48f0c7d`](https://github.com/datrs/hypercore/commit/82e48f0c7d2330f0ed845dac30db46a02d5f7c48)] Update memory-pager requirement from 0.8.0 to 0.9.0 (dependabot-preview[bot]) +- [[`580dff64c5`](https://github.com/datrs/hypercore/commit/580dff64c50377e6fc51dbed701c2dc26a2693a2)] Update sparse-bitfield requirement from 0.8.1 to 0.10.0 (dependabot-preview[bot]) +- [[`7eda3504d6`](https://github.com/datrs/hypercore/commit/7eda3504d61de0f1423d0efa272587fe8b0a1650)] Merge pull request #81 from bltavares/discovery-key-hash (Szabolcs Berecz) +- [[`1edf42f790`](https://github.com/datrs/hypercore/commit/1edf42f79007924b79e7b1b99a7e9d66abc3b4e9)] Implements discoveryKey from hypercore-crypto (Bruno Tavares) +- [[`aedef0b149`](https://github.com/datrs/hypercore/commit/aedef0b149de042313245c2baab0948da3390aef)] Update changelog (Yoshua Wuyts) + +### Stats +```diff + CHANGELOG.md | 26 ++++++++++++++++++++++++++ + Cargo.toml | 9 +++++---- + README.md | 2 +- + examples/async.rs | 30 ++++++++++++++++++++++++++++++ + src/crypto/hash.rs | 42 +++++++++++++++++++++++++++++------------- + src/crypto/merkle.rs | 10 +++++----- + src/feed.rs | 4 ++-- + 7 files changed, 98 insertions(+), 25 deletions(-) +``` + + +## 2018-12-22, Version 0.9.0 +### Commits +- [[`9c2b07fca6`](https://github.com/datrs/hypercore/commit/9c2b07fca68bb34046551f0fd152aa7f97a33fb6)] (cargo-release) version 0.9.0 (Yoshua Wuyts) +- [[`86e241f9e0`](https://github.com/datrs/hypercore/commit/86e241f9e02e3583445fcb43fcc28295eae1cd31)] 🙋 Implement feed auditing (#55) (Tim Deeb-Swihart) +- [[`5840a3a6a9`](https://github.com/datrs/hypercore/commit/5840a3a6a90f47ba89662687a374f070f3172c69)] 
Update rand requirement from 0.5.5 to 0.6.0 (#49) (dependabot[bot])
+- [[`1628057868`](https://github.com/datrs/hypercore/commit/162805786831866ea611cfe97e85def690614fa6)] use tree_index functions (#48) (Yoshua Wuyts)
+- [[`f66fbb3543`](https://github.com/datrs/hypercore/commit/f66fbb354376681062697ffd2be18da2224cb1b9)] Update merkle-tree-stream requirement from 0.7.0 to 0.8.0 (#46) (dependabot[bot])
+- [[`343df6f991`](https://github.com/datrs/hypercore/commit/343df6f991b0fbe5f50a7d95b632b3c60e5dfa54)] Update changelog (Yoshua Wuyts)
+
+### Stats
+```diff
+ CHANGELOG.md           | 26 +++++++++++++++++++-
+ Cargo.toml             |  6 ++--
+ src/audit.rs           | 20 +++++++++++++++-
+ src/bitfield/mod.rs    | 21 ++++++++++------
+ src/crypto/key_pair.rs |  2 +-
+ src/crypto/merkle.rs   | 14 ++++++----
+ src/feed.rs            | 46 ++++++++++++++++++++++++++++------
+ src/lib.rs             |  1 +-
+ src/storage/mod.rs     | 10 +++++--
+ src/storage/node.rs    |  2 +-
+ tests/feed.rs          | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++-
+ 11 files changed, 191 insertions(+), 26 deletions(-)
+```
+
+
+## 2018-10-28, Version 0.8.1
+### Commits
+- [[`938d2816cc`](https://github.com/datrs/hypercore/commit/938d2816cc63e4dd8964139baa56be2dd28e72d5)] (cargo-release) version 0.8.1 (Yoshua Wuyts)
+- [[`79fd7a8141`](https://github.com/datrs/hypercore/commit/79fd7a8141096606b4124c7d59dede2a4021b3fb)] Stricter lints (#45) (Yoshua Wuyts)
+- [[`96b3af825d`](https://github.com/datrs/hypercore/commit/96b3af825ddc5c69364fe92c71d8498f4a00a2dc)] use spec compatible constants (#44) (Yoshua Wuyts)
+- [[`ac8ef53b0c`](https://github.com/datrs/hypercore/commit/ac8ef53b0cd45f0b935ab83dde4f750eb91a07e8)] Update changelog (Yoshua Wuyts)
+
+### Stats
+```diff
+ CHANGELOG.md           | 33 +++++++++++++++++++++++++++++++++
+ Cargo.toml             | 10 +++++++++-
+ src/crypto/hash.rs     |  8 ++++----
+ src/crypto/key_pair.rs |  2 +-
+ src/crypto/merkle.rs   |  4 ++--
+ src/feed.rs            | 20 ++++++++++----------
+ src/feed_builder.rs    | 10 +++++-----
+ src/lib.rs             | 19 +++++++++++--------
+ src/prelude.rs         |  4 ++--
+ src/proof.rs           |  4 ++--
+ src/storage/mod.rs     |  2 +-
+ src/storage/node.rs    |  4 ++--
+ src/storage/persist.rs |  2 +-
+ 13 files changed, 83 insertions(+), 39 deletions(-)
+```
+
+
+## 2018-10-18, Version 0.8.0
+### Commits
+- [[`048921b077`](https://github.com/datrs/hypercore/commit/048921b077d02963e70a881fa780e6e96c347d50)] (cargo-release) version 0.8.0 (Yoshua Wuyts)
+- 
[[`54ceb55e7b`](https://github.com/datrs/hypercore/commit/54ceb55e7bf6c5c037b3849c53bc082bc57e0ee4)] travis master only builds (Yoshua Wuyts) +- [[`1a06b5862d`](https://github.com/datrs/hypercore/commit/1a06b5862d371120dc2e1695e5d1764721707e29)] upgrade (#43) (Yoshua Wuyts) +- [[`2fda376767`](https://github.com/datrs/hypercore/commit/2fda376767efe3b61fe2f3bc46a431340cf984a2)] tests/helpers -> tests/common (#38) (Yoshua Wuyts) +- [[`d48e5570fa`](https://github.com/datrs/hypercore/commit/d48e5570fa659b38519a54288b6019205cb48276)] Keep up with modern times in clippy invocation (#35) (Szabolcs Berecz) +- [[`a62a21b249`](https://github.com/datrs/hypercore/commit/a62a21b24953f6b1da5cfc902abef6914f0b7950)] Update quickcheck requirement from 0.6.2 to 0.7.1 (#33) (Szabolcs Berecz) +- [[`3bbe87db8d`](https://github.com/datrs/hypercore/commit/3bbe87db8d448e8fbc7a73a99b07ff39ec09c1e9)] Update changelog (Yoshua Wuyts) + +### Stats +```diff + .github/ISSUE_TEMPLATE.md | 40 +++--------------------------- + .github/ISSUE_TEMPLATE/bug_report.md | 23 +++++++++++++++++- + .github/ISSUE_TEMPLATE/feature_request.md | 43 ++++++++++++++++++++++++++++++++- + .github/ISSUE_TEMPLATE/question.md | 18 +++++++++++++- + .travis.yml | 24 +++++++++--------- + CHANGELOG.md | 25 +++++++++++++++++++- + Cargo.toml | 28 ++++++++++----------- + README.md | 23 +++++++++++++++-- + src/feed.rs | 2 +- + src/lib.rs | 23 +++++++++++++---- + src/replicate/peer.rs | 2 +- + src/storage/mod.rs | 2 +- + tests/common/mod.rs | 15 +++++++++++- + tests/feed.rs | 29 +++++++++++++++++++--- + tests/helpers.rs | 34 +------------------------- + tests/model.rs | 6 ++-- + tests/regression.rs | 4 +-- + 17 files changed, 232 insertions(+), 109 deletions(-) +``` + + +## 2018-09-03, Version 0.7.1 +### Commits +- [[`43ad5d3c9a`](https://github.com/datrs/hypercore/commit/43ad5d3c9accd9e4faa63fc5fe35b5c74997d503)] (cargo-release) version 0.7.1 (Yoshua Wuyts) +- 
[[`cb2cfac275`](https://github.com/datrs/hypercore/commit/cb2cfac2757a50600886251b608ab349bdc6daf4)] Update ed25519_dalek to 0.8 and rand to 0.5 (#30) (Luiz Irber) +- [[`ade97ddfe3`](https://github.com/datrs/hypercore/commit/ade97ddfe3310edbff11057740ebd03ed73075b4)] Update memory-pager requirement from 0.7.0 to 0.8.0 (dependabot[bot]) +- [[`420a3b19b0`](https://github.com/datrs/hypercore/commit/420a3b19b0daa7d32d96c3c67045adab10c0f38d)] Upgrade random-access-storage (#26) (Szabolcs Berecz) +- [[`7421f677eb`](https://github.com/datrs/hypercore/commit/7421f677eb200cfa2cceb98c027408e29cc526ee)] update changelog (Yoshua Wuyts) + +### Stats +```diff + CHANGELOG.md | 26 ++++++++++++++++++++++++++ + Cargo.toml | 14 +++++++------- + benches/bench.rs | 8 +++----- + src/crypto/key_pair.rs | 4 ++-- + src/feed.rs | 16 ++++++++-------- + src/feed_builder.rs | 6 +++--- + src/storage/mod.rs | 40 ++++++++++++++++++++-------------------- + src/storage/persist.rs | 4 ++-- + tests/compat.rs | 6 +++--- + tests/feed.rs | 10 +++++----- + tests/helpers.rs | 8 ++++---- + 11 files changed, 83 insertions(+), 59 deletions(-) +``` + + +## 2018-08-25, Version 0.7.0 +### Commits +- [[`c4c5986191`](https://github.com/datrs/hypercore/commits/c4c5986191ab9dc07443264c65d0f2edc6971439)] (cargo-release) version 0.7.0 (Yoshua Wuyts) +- [[`7d6bde061c`](https://github.com/datrs/hypercore/commits/7d6bde061c6724a216f59ecd90970722b0c0f118)] Storage: implement keypair read/write (#18) +- [[`d027f37ed8`](https://github.com/datrs/hypercore/commits/d027f37ed8aa5c9a487a7e0260fa1ca0cd089011)] Update sparse-bitfield requirement from 0.4.0 to 0.8.0 (#20) +- [[`5d9b05f029`](https://github.com/datrs/hypercore/commits/5d9b05f029f2e1427770c4169794ce1cccd70ec5)] Update memory-pager requirement from 0.4.5 to 0.7.0 +- [[`73a3f28e26`](https://github.com/datrs/hypercore/commits/73a3f28e26957c627254ed024092df7ae057d277)] Update sleep-parser requirement from 0.4.0 to 0.6.0 +- 
[[`566b7a1021`](https://github.com/datrs/hypercore/commits/566b7a1021a36e7dc82ca22091ee21df88870d57)] Upgrade to latest random-access-storage (#17) +- [[`e086e60942`](https://github.com/datrs/hypercore/commits/e086e609428d015bc831384ff3e16a8c9a295bc7)] Add rustfmt back to travis (#19) +- [[`eb5edfba43`](https://github.com/datrs/hypercore/commits/eb5edfba438f8617d076f3a3f95636dfd3cc29ad)] (cargo-release) start next development iteration 0.6.1-alpha.0 (Yoshua Wuyts) + +### Stats +```diff + .travis.yml | 1 +- + Cargo.toml | 14 ++++++------ + src/bitfield/mod.rs | 9 +++----- + src/feed.rs | 49 +++++++++++++++++++++++++++++-------------- + src/feed_builder.rs | 3 ++- + src/lib.rs | 2 +- + src/storage/mod.rs | 62 +++++++++++++++++++++++++++++++++++++++++++++++++----- + tests/compat.rs | 7 +++--- + tests/feed.rs | 32 ++++++++++++++++++++++++++++- + tests/helpers.rs | 2 +- + tests/storage.rs | 54 +++++++++++++++++++++++++++++++++++++++++++++++- + 11 files changed, 197 insertions(+), 38 deletions(-) +``` diff --git a/vendor/hypercore/Cargo.toml b/vendor/hypercore/Cargo.toml new file mode 100644 index 00000000..d85ae45e --- /dev/null +++ b/vendor/hypercore/Cargo.toml @@ -0,0 +1,175 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. 
+ +[package] +edition = "2021" +name = "hypercore" +version = "0.12.1" +authors = [ + "Yoshua Wuyts ", + "Timo Tiuraniemi ", +] +description = "Secure, distributed, append-only log" +documentation = "https://docs.rs/hypercore" +readme = "README.md" +keywords = [ + "dat", + "p2p", + "stream", + "feed", + "merkle", +] +categories = [ + "asynchronous", + "concurrency", + "cryptography", + "data-structures", + "encoding", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/datrs/hypercore" + +[[bench]] +name = "memory" +harness = false + +[[bench]] +name = "disk" +harness = false + +[dependencies.blake2] +version = "0.10" + +[dependencies.byteorder] +version = "1" + +[dependencies.compact-encoding] +version = "1" + +[dependencies.crc32fast] +version = "1" + +[dependencies.ed25519-dalek] +version = "2" +features = ["rand_core"] + +[dependencies.flat-tree] +version = "6" + +[dependencies.futures] +version = "0.3" + +[dependencies.getrandom] +version = "0.2" +features = ["js"] + +[dependencies.intmap] +version = "2" + +[dependencies.merkle-tree-stream] +version = "0.12" + +[dependencies.moka] +version = "0.12" +features = ["sync"] +optional = true + +[dependencies.pretty-hash] +version = "0.4" + +[dependencies.rand] +version = "0.8" + +[dependencies.random-access-memory] +version = "3" + +[dependencies.random-access-storage] +version = "5" + +[dependencies.sha2] +version = "0.10" + +[dependencies.thiserror] +version = "1" + +[dependencies.tracing] +version = "0.1" + +[dev-dependencies.anyhow] +version = "1.0.70" + +[dev-dependencies.async-std] +version = "1.12.0" +features = ["attributes"] + +[dev-dependencies.criterion] +version = "0.4" +features = [ + "async_std", + "async_tokio", +] + +[dev-dependencies.data-encoding] +version = "2.2.0" + +[dev-dependencies.proptest] +version = "1.1.0" + +[dev-dependencies.proptest-derive] +version = "0.2.0" + +[dev-dependencies.remove_dir_all] +version = "0.7.0" + +[dev-dependencies.sha2] +version = "0.10" + 
+[dev-dependencies.tempfile] +version = "3.1.0" + +[dev-dependencies.test-log] +version = "0.2.11" +features = ["trace"] +default-features = false + +[dev-dependencies.tokio] +version = "1.27.0" +features = [ + "macros", + "rt", + "rt-multi-thread", +] +default-features = false + +[dev-dependencies.tokio-test] +version = "0.4" + +[dev-dependencies.tracing-subscriber] +version = "0.3.16" +features = [ + "env-filter", + "fmt", +] + +[features] +async-std = ["random-access-disk/async-std"] +cache = ["moka"] +default = [ + "async-std", + "sparse", +] +js_interop_tests = [] +sparse = ["random-access-disk/sparse"] +tokio = ["random-access-disk/tokio"] + +[target."cfg(not(target_arch = \"wasm32\"))".dependencies.random-access-disk] +version = "3" +default-features = false diff --git a/vendor/hypercore/LICENSE-APACHE b/vendor/hypercore/LICENSE-APACHE new file mode 100644 index 00000000..6ab06963 --- /dev/null +++ b/vendor/hypercore/LICENSE-APACHE @@ -0,0 +1,190 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2018 Yoshua Wuyts + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/hypercore/LICENSE-MIT b/vendor/hypercore/LICENSE-MIT new file mode 100644 index 00000000..c7509bad --- /dev/null +++ b/vendor/hypercore/LICENSE-MIT @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2018 Yoshua Wuyts + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/hypercore/README.md b/vendor/hypercore/README.md new file mode 100644 index 00000000..a95afaeb --- /dev/null +++ b/vendor/hypercore/README.md @@ -0,0 +1,105 @@ +# Hypercore +[![crates.io version][1]][2] [![build status][3]][4] +[![downloads][5]][6] [![docs.rs docs][7]][8] + +Hypercore is a secure, distributed append-only log. This crate is a limited Rust +port of the original Javascript +[holepunchto/hypercore](https://github.com/holepunchto/hypercore). The goal is to +maintain binary compatibility with the LTS version with regards to disk storage. + +See [hypercore-protocol-rs](https://github.com/datrs/hypercore-protocol-rs) for the +corresponding wire protocol implementation. 
+ +- [Documentation][8] +- [Crates.io][2] + +## Features + +- [x] Create [in-memory](https://github.com/datrs/random-access-memory) and [disk](https://github.com/datrs/random-access-disk) hypercores +- [x] Append to hypercore either a single entry or a batch of entries +- [x] Get entries from hypercore +- [x] Clear range from hypercore, with optional support for sparse files +- [x] Support basic replication by creating proofs in a source hypercore and verifying and applying them to a destination hypercore +- [x] Support `tokio` or `async-std` runtimes +- [x] Support WASM for in-memory storage +- [x] Test Javascript interoperability for supported features +- [x] Add optional read cache +- [ ] Support the new [manifest](https://github.com/holepunchto/hypercore/blob/main/lib/manifest.js) in the wire protocol to remain compatible with upcoming v11 +- [ ] Finalize documentation and release v1.0.0 + +## Usage + +```rust +// Create an in-memory hypercore using a builder +let mut hypercore = HypercoreBuilder::new(Storage::new_memory().await.unwrap()) + .build() + .await + .unwrap(); + +// Append entries to the log +hypercore.append(b"Hello, ").await.unwrap(); +hypercore.append(b"world!").await.unwrap(); + +// Read entries from the log +assert_eq!(hypercore.get(0).await.unwrap().unwrap(), b"Hello, "); +assert_eq!(hypercore.get(1).await.unwrap().unwrap(), b"world!"); +``` + +Find more examples in the [examples](./examples) folder, and/or run: + +```bash +cargo run --example memory +cargo run --example disk +cargo run --example replication +``` + +## Installation + +```bash +cargo add hypercore +``` + +## Safety + +This crate uses ``#![forbid(unsafe_code)]`` to ensure everything is implemented in +100% Safe Rust. + +## Development + +To test interoperability with Javascript, enable the `js_interop_tests` feature: + +```bash +cargo test --features js_interop_tests +``` + +Run benches with: + +```bash +cargo bench +``` + +## Contributing + +Want to join us? 
Check out our ["Contributing" guide][contributing] and take a +look at some of these issues: + +- [Issues labeled "good first issue"][good-first-issue] +- [Issues labeled "help wanted"][help-wanted] + +## License + +[MIT](./LICENSE-MIT) OR [Apache-2.0](./LICENSE-APACHE) + +[1]: https://img.shields.io/crates/v/hypercore.svg?style=flat-square +[2]: https://crates.io/crates/hypercore +[3]: https://github.com/datrs/hypercore/actions/workflows/ci.yml/badge.svg +[4]: https://github.com/datrs/hypercore/actions +[5]: https://img.shields.io/crates/d/hypercore.svg?style=flat-square +[6]: https://crates.io/crates/hypercore +[7]: https://img.shields.io/badge/docs-latest-blue.svg?style=flat-square +[8]: https://docs.rs/hypercore + +[releases]: https://github.com/datrs/hypercore/releases +[contributing]: https://github.com/datrs/hypercore/blob/master/.github/CONTRIBUTING.md +[good-first-issue]: https://github.com/datrs/hypercore/labels/good%20first%20issue +[help-wanted]: https://github.com/datrs/hypercore/labels/help%20wanted diff --git a/vendor/hypercore/benches/disk.rs b/vendor/hypercore/benches/disk.rs new file mode 100644 index 00000000..326f57b3 --- /dev/null +++ b/vendor/hypercore/benches/disk.rs @@ -0,0 +1,140 @@ +use std::time::{Duration, Instant}; + +#[cfg(feature = "async-std")] +use criterion::async_executor::AsyncStdExecutor; +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use hypercore::{Hypercore, HypercoreBuilder, HypercoreError, Storage}; +use random_access_disk::RandomAccessDisk; +use tempfile::Builder as TempfileBuilder; + +fn bench_create_disk(c: &mut Criterion) { + let mut group = c.benchmark_group("slow_call"); + group.measurement_time(Duration::from_secs(20)); + + #[cfg(feature = "async-std")] + group.bench_function("create_disk", move |b| { + b.to_async(AsyncStdExecutor) + .iter(|| create_hypercore("create")); + }); + #[cfg(feature = "tokio")] + group.bench_function("create_disk", move |b| { + let rt = 
tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter(|| create_hypercore("create")); + }); +} + +#[cfg(feature = "cache")] +async fn create_hypercore(name: &str) -> Result<Hypercore<RandomAccessDisk>, HypercoreError> { + let dir = TempfileBuilder::new() + .prefix(name) + .tempdir() + .unwrap() + .into_path(); + let storage = Storage::new_disk(&dir, true).await?; + HypercoreBuilder::new(storage) + .node_cache_options(hypercore::CacheOptionsBuilder::new()) + .build() + .await +} + +#[cfg(not(feature = "cache"))] +async fn create_hypercore(name: &str) -> Result<Hypercore<RandomAccessDisk>, HypercoreError> { + let dir = TempfileBuilder::new() + .prefix(name) + .tempdir() + .unwrap() + .into_path(); + let storage = Storage::new_disk(&dir, true).await?; + HypercoreBuilder::new(storage).build().await +} + +fn bench_write_disk(c: &mut Criterion) { + let mut group = c.benchmark_group("slow_call"); + group.measurement_time(Duration::from_secs(20)); + + #[cfg(feature = "async-std")] + group.bench_function("write disk", |b| { + b.to_async(AsyncStdExecutor).iter_custom(write_disk); + }); + #[cfg(feature = "tokio")] + group.bench_function("write disk", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(write_disk); + }); +} + +async fn write_disk(iters: u64) -> Duration { + let mut hypercore = create_hypercore("write").await.unwrap(); + let data = Vec::from("hello"); + let start = Instant::now(); + for _ in 0..iters { + black_box(hypercore.append(&data).await.unwrap()); + } + start.elapsed() +} + +fn bench_read_disk(c: &mut Criterion) { + let mut group = c.benchmark_group("slow_call"); + group.measurement_time(Duration::from_secs(20)); + + #[cfg(feature = "async-std")] + group.bench_function("read disk", |b| { + b.to_async(AsyncStdExecutor).iter_custom(read_disk); + }); + #[cfg(feature = "tokio")] + group.bench_function("read disk", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(read_disk); + }); +} + +async fn read_disk(iters: u64) -> Duration { 
+ let mut hypercore = create_hypercore("read").await.unwrap(); + let data = Vec::from("hello"); + for _ in 0..iters { + hypercore.append(&data).await.unwrap(); + } + let start = Instant::now(); + for i in 0..iters { + black_box(hypercore.get(i).await.unwrap()); + } + start.elapsed() +} + +fn bench_clear_disk(c: &mut Criterion) { + let mut group = c.benchmark_group("slow_call"); + group.measurement_time(Duration::from_secs(20)); + + #[cfg(feature = "async-std")] + group.bench_function("clear disk", |b| { + b.to_async(AsyncStdExecutor).iter_custom(clear_disk); + }); + #[cfg(feature = "tokio")] + group.bench_function("clear disk", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(clear_disk); + }); +} + +#[allow(clippy::unit_arg)] +async fn clear_disk(iters: u64) -> Duration { + let mut hypercore = create_hypercore("clear").await.unwrap(); + let data = Vec::from("hello"); + for _ in 0..iters { + hypercore.append(&data).await.unwrap(); + } + let start = Instant::now(); + for i in 0..iters { + black_box(hypercore.clear(i, 1).await.unwrap()); + } + start.elapsed() +} + +criterion_group!( + benches, + bench_create_disk, + bench_write_disk, + bench_read_disk, + bench_clear_disk +); +criterion_main!(benches); diff --git a/vendor/hypercore/benches/memory.rs b/vendor/hypercore/benches/memory.rs new file mode 100644 index 00000000..b439b1e1 --- /dev/null +++ b/vendor/hypercore/benches/memory.rs @@ -0,0 +1,128 @@ +use std::time::{Duration, Instant}; + +#[cfg(feature = "async-std")] +use criterion::async_executor::AsyncStdExecutor; +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use hypercore::{Hypercore, HypercoreBuilder, HypercoreError, Storage}; +use random_access_memory::RandomAccessMemory; + +fn bench_create_memory(c: &mut Criterion) { + #[cfg(feature = "async-std")] + c.bench_function("create memory", |b| { + b.to_async(AsyncStdExecutor).iter(|| create_hypercore(1024)); + }); + #[cfg(feature = "tokio")] + 
c.bench_function("create memory", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter(|| create_hypercore(1024)); + }); +} + +#[cfg(feature = "cache")] +async fn create_hypercore( + page_size: usize, +) -> Result<Hypercore<RandomAccessMemory>, HypercoreError> { + let storage = Storage::open( + |_| Box::pin(async move { Ok(RandomAccessMemory::new(page_size)) }), + false, + ) + .await?; + HypercoreBuilder::new(storage) + .node_cache_options(hypercore::CacheOptionsBuilder::new()) + .build() + .await +} + +#[cfg(not(feature = "cache"))] +async fn create_hypercore( + page_size: usize, +) -> Result<Hypercore<RandomAccessMemory>, HypercoreError> { + let storage = Storage::open( + |_| Box::pin(async move { Ok(RandomAccessMemory::new(page_size)) }), + false, + ) + .await?; + HypercoreBuilder::new(storage).build().await +} + +fn bench_write_memory(c: &mut Criterion) { + #[cfg(feature = "async-std")] + c.bench_function("write memory", |b| { + b.to_async(AsyncStdExecutor).iter_custom(write_memory); + }); + #[cfg(feature = "tokio")] + c.bench_function("write memory", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(write_memory); + }); +} + +async fn write_memory(iters: u64) -> Duration { + let mut hypercore = create_hypercore(1024).await.unwrap(); + let data = Vec::from("hello"); + let start = Instant::now(); + for _ in 0..iters { + black_box(hypercore.append(&data).await.unwrap()); + } + start.elapsed() +} + +fn bench_read_memory(c: &mut Criterion) { + #[cfg(feature = "async-std")] + c.bench_function("read memory", |b| { + b.to_async(AsyncStdExecutor).iter_custom(read_memory); + }); + #[cfg(feature = "tokio")] + c.bench_function("read memory", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(read_memory); + }); +} + +async fn read_memory(iters: u64) -> Duration { + let mut hypercore = create_hypercore(1024).await.unwrap(); + let data = Vec::from("hello"); + for _ in 0..iters { + hypercore.append(&data).await.unwrap(); + } + let 
start = Instant::now(); + for i in 0..iters { + black_box(hypercore.get(i).await.unwrap()); + } + start.elapsed() +} + +fn bench_clear_memory(c: &mut Criterion) { + #[cfg(feature = "async-std")] + c.bench_function("clear memory", |b| { + b.to_async(AsyncStdExecutor).iter_custom(clear_memory); + }); + #[cfg(feature = "tokio")] + c.bench_function("clear memory", |b| { + let rt = tokio::runtime::Runtime::new().unwrap(); + b.to_async(&rt).iter_custom(clear_memory); + }); +} + +#[allow(clippy::unit_arg)] +async fn clear_memory(iters: u64) -> Duration { + let mut hypercore = create_hypercore(1024).await.unwrap(); + let data = Vec::from("hello"); + for _ in 0..iters { + hypercore.append(&data).await.unwrap(); + } + let start = Instant::now(); + for i in 0..iters { + black_box(hypercore.clear(i, 1).await.unwrap()); + } + start.elapsed() +} + +criterion_group!( + benches, + bench_create_memory, + bench_write_memory, + bench_read_memory, + bench_clear_memory +); +criterion_main!(benches); diff --git a/vendor/hypercore/examples/disk.rs b/vendor/hypercore/examples/disk.rs new file mode 100644 index 00000000..99990897 --- /dev/null +++ b/vendor/hypercore/examples/disk.rs @@ -0,0 +1,88 @@ +#[cfg(feature = "async-std")] +use async_std::main as async_main; +use hypercore::{HypercoreBuilder, HypercoreError, Storage}; +use tempfile::Builder; +#[cfg(feature = "tokio")] +use tokio::main as async_main; + +/// Example about using an in-memory hypercore. +#[async_main] +async fn main() { + // For the purposes of this example, first create a + // temporary directory to hold hypercore. + let dir = Builder::new() + .prefix("examples_disk") + .tempdir() + .unwrap() + .into_path(); + + // Create a disk storage, overwriting existing values. 
+ let overwrite = true; + let storage = Storage::new_disk(&dir, overwrite) + .await + .expect("Could not create disk storage"); + + // Build a new disk hypercore + let mut hypercore = HypercoreBuilder::new(storage) + .build() + .await + .expect("Could not create disk hypercore"); + + // Append values to the hypercore + hypercore.append(b"Hello, ").await.unwrap(); + hypercore.append(b"from ").await.unwrap(); + + // Close hypercore + drop(hypercore); + + // Open hypercore again from same directory, not + // overwriting. + let overwrite = false; + let storage = Storage::new_disk(&dir, overwrite) + .await + .expect("Could not open existing disk storage"); + let mut hypercore = HypercoreBuilder::new(storage) + .open(true) + .build() + .await + .expect("Could not open disk hypercore"); + + // Append new values to the hypercore + hypercore.append(b"disk hypercore!").await.unwrap(); + + // Add three values and clear the first two + let batch: &[&[u8]] = &[ + b"first value to clear", + b"second value to clear", + b"third value to keep", + ]; + let new_length = hypercore.append_batch(batch).await.unwrap().length; + hypercore + .clear(new_length - 3, new_length - 1) + .await + .unwrap(); + + // The two values return None, but the last one returns correctly + assert!(hypercore.get(3).await.unwrap().is_none()); + assert!(hypercore.get(4).await.unwrap().is_none()); + assert_eq!( + hypercore.get(5).await.unwrap().unwrap(), + b"third value to keep" + ); + + // Print the first three values, converting binary back to string + println!( + "{}{}{}", + format_res(hypercore.get(0).await), + format_res(hypercore.get(1).await), + format_res(hypercore.get(2).await) + ); // prints "Hello, from disk hypercore!" 
+} + +fn format_res(res: Result<Option<Vec<u8>>, HypercoreError>) -> String { + match res { + Ok(Some(bytes)) => String::from_utf8(bytes).expect("Shouldn't fail in example"), + Ok(None) => "Got None in feed".to_string(), + Err(e) => format!("Error getting value from feed, reason = {e:?}"), + } +} diff --git a/vendor/hypercore/examples/memory.rs b/vendor/hypercore/examples/memory.rs new file mode 100644 index 00000000..a510ed6d --- /dev/null +++ b/vendor/hypercore/examples/memory.rs @@ -0,0 +1,59 @@ +#[cfg(feature = "async-std")] +use async_std::main as async_main; +use hypercore::{HypercoreBuilder, HypercoreError, Storage}; +#[cfg(feature = "tokio")] +use tokio::main as async_main; + +/// Example about using an in-memory hypercore. +#[async_main] +async fn main() { + // Create a memory storage + let storage = Storage::new_memory() + .await + .expect("Could not create memory storage"); + + // Build hypercore + let mut hypercore = HypercoreBuilder::new(storage) + .build() + .await + .expect("Could not create memory hypercore"); + + // Append values + hypercore.append(b"Hello, ").await.unwrap(); + hypercore.append(b"from memory hypercore!").await.unwrap(); + + // Add three values and clear the first two + let batch: &[&[u8]] = &[ + b"first value to clear", + b"second value to clear", + b"third value to keep", + ]; + let new_length = hypercore.append_batch(batch).await.unwrap().length; + hypercore + .clear(new_length - 3, new_length - 1) + .await + .unwrap(); + + // The two values return None, but the last one returns correctly + assert!(hypercore.get(2).await.unwrap().is_none()); + assert!(hypercore.get(3).await.unwrap().is_none()); + assert_eq!( + hypercore.get(4).await.unwrap().unwrap(), + b"third value to keep" + ); + + // Print values, converting binary back to string + println!( + "{}{}", + format_res(hypercore.get(0).await), + format_res(hypercore.get(1).await) + ); // prints "Hello, from memory hypercore!" 
+} + +fn format_res(res: Result<Option<Vec<u8>>, HypercoreError>) -> String { + match res { + Ok(Some(bytes)) => String::from_utf8(bytes).expect("Shouldn't fail in example"), + Ok(None) => "Got None in feed".to_string(), + Err(e) => format!("Error getting value from feed, reason = {e:?}"), + } +} diff --git a/vendor/hypercore/examples/replication.rs b/vendor/hypercore/examples/replication.rs new file mode 100644 index 00000000..52c205ac --- /dev/null +++ b/vendor/hypercore/examples/replication.rs @@ -0,0 +1,116 @@ +#[cfg(feature = "async-std")] +use async_std::main as async_main; +use hypercore::{ + Hypercore, HypercoreBuilder, HypercoreError, PartialKeypair, RequestBlock, RequestUpgrade, + Storage, +}; +use random_access_disk::RandomAccessDisk; +use random_access_memory::RandomAccessMemory; +use tempfile::Builder; +#[cfg(feature = "tokio")] +use tokio::main as async_main; + +/// Example on how to replicate a (disk) hypercore to another (memory) hypercore. +/// NB: The replication functions used here are low-level, built for use in the wire +/// protocol. +#[async_main] +async fn main() { + // For the purposes of this example, first create a + // temporary directory to hold hypercore. + let dir = Builder::new() + .prefix("examples_replication") + .tempdir() + .unwrap() + .into_path(); + + // Create a disk storage, overwriting existing values. 
+ let overwrite = true; + let storage = Storage::new_disk(&dir, overwrite) + .await + .expect("Could not create disk storage"); + + // Build a new disk hypercore + let mut origin_hypercore = HypercoreBuilder::new(storage) + .build() + .await + .expect("Could not create disk hypercore"); + + // Append values to the hypercore + let batch: &[&[u8]] = &[b"Hello, ", b"from ", b"replicated ", b"hypercore!"]; + origin_hypercore.append_batch(batch).await.unwrap(); + + // Store the public key + let origin_public_key = origin_hypercore.key_pair().public; + + // Create a peer of the origin hypercore using the public key + let mut replicated_hypercore = HypercoreBuilder::new( + Storage::new_memory() + .await + .expect("Could not create memory storage"), + ) + .key_pair(PartialKeypair { + public: origin_public_key, + secret: None, + }) + .build() + .await + .expect("Could not create memory hypercore"); + + // Replicate the four values in random order + replicate_index(&mut origin_hypercore, &mut replicated_hypercore, 3).await; + replicate_index(&mut origin_hypercore, &mut replicated_hypercore, 0).await; + replicate_index(&mut origin_hypercore, &mut replicated_hypercore, 2).await; + replicate_index(&mut origin_hypercore, &mut replicated_hypercore, 1).await; + + // Print values from replicated hypercore, converting binary back to string + println!( + "{}{}{}{}", + format_res(replicated_hypercore.get(0).await), + format_res(replicated_hypercore.get(1).await), + format_res(replicated_hypercore.get(2).await), + format_res(replicated_hypercore.get(3).await) + ); // prints "Hello, from replicated hypercore!" 
+} + +async fn replicate_index( + origin_hypercore: &mut Hypercore<RandomAccessDisk>, + replicated_hypercore: &mut Hypercore<RandomAccessMemory>, + request_index: u64, +) { + let missing_nodes = origin_hypercore + .missing_nodes(request_index) + .await + .expect("Could not get missing nodes"); + let upgrade_start = replicated_hypercore.info().contiguous_length; + let upgrade_length = origin_hypercore.info().contiguous_length - upgrade_start; + + let proof = origin_hypercore + .create_proof( + Some(RequestBlock { + index: request_index, + nodes: missing_nodes, + }), + None, + None, + Some(RequestUpgrade { + start: upgrade_start, + length: upgrade_length, + }), + ) + .await + .expect("Creating proof error") + .expect("Could not get proof"); + // Then the proof is verified and applied to the replicated party. + assert!(replicated_hypercore + .verify_and_apply_proof(&proof) + .await + .expect("Verifying and applying proof failed")); +} + +fn format_res(res: Result<Option<Vec<u8>>, HypercoreError>) -> String { + match res { + Ok(Some(bytes)) => String::from_utf8(bytes).expect("Shouldn't fail in example"), + Ok(None) => "Got None in feed".to_string(), + Err(e) => format!("Error getting value from feed, reason = {e:?}"), + } +} diff --git a/vendor/hypercore/src/bitfield/dynamic.rs b/vendor/hypercore/src/bitfield/dynamic.rs new file mode 100644 index 00000000..6c827c47 --- /dev/null +++ b/vendor/hypercore/src/bitfield/dynamic.rs @@ -0,0 +1,403 @@ +use super::fixed::{FixedBitfield, FIXED_BITFIELD_BITS_LENGTH, FIXED_BITFIELD_LENGTH}; +use crate::{ + common::{BitfieldUpdate, StoreInfo, StoreInfoInstruction, StoreInfoType}, + Store, +}; +use futures::future::Either; +use std::{cell::RefCell, convert::TryInto}; + +const DYNAMIC_BITFIELD_PAGE_SIZE: usize = 32768; + +/// Dynamic sized bitfield, uses a map of `FixedBitfield` elements. +/// See: +/// https://github.com/hypercore-protocol/hypercore/blob/master/lib/bitfield.js +/// for reference. 
+#[derive(Debug)] +pub(crate) struct DynamicBitfield { + pages: intmap::IntMap<RefCell<FixedBitfield>>, + biggest_page_index: u64, + unflushed: Vec<u64>, +} + +impl DynamicBitfield { + pub(crate) fn open(info: Option<StoreInfo>) -> Either<StoreInfoInstruction, Self> { + match info { + None => Either::Left(StoreInfoInstruction::new_size(Store::Bitfield, 0)), + Some(info) => { + if info.info_type == StoreInfoType::Size { + let bitfield_store_length = info.length.unwrap(); + // Read only multiples of 4 bytes. + let length = bitfield_store_length - (bitfield_store_length & 3); + return Either::Left(StoreInfoInstruction::new_content( + Store::Bitfield, + 0, + length, + )); + } + let data = info.data.expect("Did not receive bitfield store content"); + let resumed = data.len() >= 4; + let mut biggest_page_index = 0; + if resumed { + let mut pages: intmap::IntMap<RefCell<FixedBitfield>> = intmap::IntMap::new(); + let mut data_index = 0; + while data_index < data.len() { + let parent_index: u64 = (data_index / FIXED_BITFIELD_LENGTH) as u64; + pages.insert( + parent_index, + RefCell::new(FixedBitfield::from_data(data_index, &data)), + ); + if parent_index > biggest_page_index { + biggest_page_index = parent_index; + } + data_index += FIXED_BITFIELD_LENGTH; + } + Either::Right(Self { + pages, + unflushed: vec![], + biggest_page_index, + }) + } else { + Either::Right(Self { + pages: intmap::IntMap::new(), + unflushed: vec![], + biggest_page_index, + }) + } + } + } + } + + /// Flushes pending changes, returns info slices to write to storage. 
+ pub(crate) fn flush(&mut self) -> Box<[StoreInfo]> { + let mut infos_to_flush: Vec<StoreInfo> = Vec::with_capacity(self.unflushed.len()); + for unflushed_id in &self.unflushed { + let mut p = self.pages.get_mut(*unflushed_id).unwrap().borrow_mut(); + let data = p.to_bytes(); + infos_to_flush.push(StoreInfo::new_content( + Store::Bitfield, + *unflushed_id * data.len() as u64, + &data, + )); + p.dirty = false; + } + self.unflushed = vec![]; + infos_to_flush.into_boxed_slice() + } + + pub(crate) fn get(&self, index: u64) -> bool { + let j = index & (DYNAMIC_BITFIELD_PAGE_SIZE as u64 - 1); + let i = (index - j) / DYNAMIC_BITFIELD_PAGE_SIZE as u64; + + if !self.pages.contains_key(i) { + false + } else { + let p = self.pages.get(i).unwrap().borrow(); + p.get(j.try_into().expect("Index should have fit into u32")) + } + } + + #[allow(dead_code)] + pub(crate) fn set(&mut self, index: u64, value: bool) -> bool { + let j = index & (DYNAMIC_BITFIELD_PAGE_SIZE as u64 - 1); + let i = (index - j) / DYNAMIC_BITFIELD_PAGE_SIZE as u64; + + if !self.pages.contains_key(i) { + if value { + self.pages.insert(i, RefCell::new(FixedBitfield::new())); + if i > self.biggest_page_index { + self.biggest_page_index = i; + } + } else { + // The page does not exist, but when setting false, that doesn't matter + return false; + } + } + + let mut p = self.pages.get_mut(i).unwrap().borrow_mut(); + let changed: bool = p.set(j.try_into().expect("Index should have fit into u32"), value); + + if changed && !p.dirty { + p.dirty = true; + self.unflushed.push(i); + } + changed + } + + pub(crate) fn update(&mut self, bitfield_update: &BitfieldUpdate) { + self.set_range( + bitfield_update.start, + bitfield_update.length, + !bitfield_update.drop, + ) + } + + pub(crate) fn set_range(&mut self, start: u64, length: u64, value: bool) { + let mut j = start & (DYNAMIC_BITFIELD_PAGE_SIZE as u64 - 1); + let mut i = (start - j) / (DYNAMIC_BITFIELD_PAGE_SIZE as u64); + let mut length = length; + + while length > 0 { + if 
!self.pages.contains_key(i) { + self.pages.insert(i, RefCell::new(FixedBitfield::new())); + if i > self.biggest_page_index { + self.biggest_page_index = i; + } + } + let mut p = self.pages.get_mut(i).unwrap().borrow_mut(); + + let end = std::cmp::min(j + length, DYNAMIC_BITFIELD_PAGE_SIZE as u64); + + let range_start: u32 = j + .try_into() + .expect("Range start should have fit into a u32"); + let range_end: u32 = (end - j) + .try_into() + .expect("Range end should have fit into a u32"); + + let changed = p.set_range(range_start, range_end, value); + if changed && !p.dirty { + p.dirty = true; + self.unflushed.push(i); + } + + j = 0; + i += 1; + length -= range_end as u64; + } + } + + /// Finds the first index of the value after given position. Returns None if not found. + pub(crate) fn index_of(&self, value: bool, position: u64) -> Option<u64> { + let first_index = position & (DYNAMIC_BITFIELD_PAGE_SIZE as u64 - 1); + let first_page = (position - first_index) / (DYNAMIC_BITFIELD_PAGE_SIZE as u64); + + if value { + // For finding the first positive value, we only care about pages that are set, + // not pages that don't exist, as they can't possibly contain the value. + + // To keep the common case fast, first try the same page as the position + if let Some(p) = self.pages.get(first_page) { + if let Some(index) = p.borrow().index_of(value, first_index as u32) { + return Some(first_page * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } + + // It wasn't found on the first page, now get the keys that are bigger + // than the given index and sort them. + let mut keys: Vec<&u64> = self.pages.keys().filter(|key| **key > first_page).collect(); + keys.sort(); + for key in keys { + if let Some(p) = self.pages.get(*key) { + if let Some(index) = p.borrow().index_of(value, 0) { + return Some(key * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } + } + } else { + // Searching for the false value is easier as it is automatically hit on + // a missing page. 
+ let mut i = first_page; + let mut j = first_index as u32; + while i == first_page || i <= self.biggest_page_index { + if let Some(p) = self.pages.get(i) { + if let Some(index) = p.borrow().index_of(value, j) { + return Some(i * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } else { + return Some(i * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + j as u64); + } + i += 1; + j = 0; // We start at the beginning of each page + } + } + None + } + + /// Finds the last index of the value before given position. Returns None if not found. + pub(crate) fn last_index_of(&self, value: bool, position: u64) -> Option<u64> { + let last_index = position & (DYNAMIC_BITFIELD_PAGE_SIZE as u64 - 1); + let last_page = (position - last_index) / (DYNAMIC_BITFIELD_PAGE_SIZE as u64); + + if value { + // For finding the last positive value, we only care about pages that are set, + // not pages that don't exist, as they can't possibly contain the value. + + // To keep the common case fast, first try the same page as the position + if let Some(p) = self.pages.get(last_page) { + if let Some(index) = p.borrow().last_index_of(value, last_index as u32) { + return Some(last_page * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } + + // It wasn't found on the last page, now get the keys that are smaller + // than the given index and sort them. + let mut keys: Vec<&u64> = self.pages.keys().filter(|key| **key < last_page).collect(); + keys.sort(); + keys.reverse(); + + for key in keys { + if let Some(p) = self.pages.get(*key) { + if let Some(index) = p + .borrow() + .last_index_of(value, FIXED_BITFIELD_BITS_LENGTH as u32 - 1) + { + return Some(key * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } + } + } else { + // Searching for the false value is easier as it is automatically hit on + // a missing page. 
+ let mut i = last_page; + let mut j = last_index as u32; + while i == last_page || i == 0 { + if let Some(p) = self.pages.get(i) { + if let Some(index) = p.borrow().last_index_of(value, j) { + return Some(i * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + index as u64); + }; + } else { + return Some(i * DYNAMIC_BITFIELD_PAGE_SIZE as u64 + j as u64); + } + i -= 1; + j = FIXED_BITFIELD_BITS_LENGTH as u32 - 1; // We start at end of each page + } + } + + None + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn assert_value_range(bitfield: &DynamicBitfield, start: u64, length: u64, value: bool) { + for i in start..start + length { + assert_eq!(bitfield.get(i), value); + } + } + + fn get_dynamic_bitfield() -> DynamicBitfield { + match DynamicBitfield::open(Some(StoreInfo::new_content(Store::Bitfield, 0, &[]))) { + Either::Left(_) => panic!("Could not open bitfield"), + Either::Right(bitfield) => bitfield, + } + } + + #[test] + fn bitfield_dynamic_get_and_set() { + let mut bitfield = get_dynamic_bitfield(); + assert_value_range(&bitfield, 0, 9, false); + assert_eq!(bitfield.index_of(true, 0), None); + assert_eq!(bitfield.index_of(false, 0), Some(0)); + assert_eq!(bitfield.last_index_of(true, 9), None); + assert_eq!(bitfield.last_index_of(false, 9), Some(9)); + assert_eq!(bitfield.index_of(true, 10000000), None); + assert_eq!(bitfield.index_of(false, 10000000), Some(10000000)); + assert_eq!(bitfield.last_index_of(true, 10000000), None); + assert_eq!(bitfield.last_index_of(false, 10000000), Some(10000000)); + + bitfield.set(0, true); + assert!(bitfield.get(0)); + assert_eq!(bitfield.index_of(true, 0), Some(0)); + assert_eq!(bitfield.index_of(false, 0), Some(1)); + assert_eq!(bitfield.last_index_of(true, 9), Some(0)); + assert_eq!(bitfield.last_index_of(false, 9), Some(9)); + assert_eq!(bitfield.last_index_of(true, 10000000), Some(0)); + assert_eq!(bitfield.last_index_of(false, 10000000), Some(10000000)); + + assert_value_range(&bitfield, 1, 63, false); + bitfield.set(31, 
true); + assert!(bitfield.get(31)); + + assert_value_range(&bitfield, 32, 32, false); + assert!(!bitfield.get(32)); + bitfield.set(32, true); + assert!(bitfield.get(32)); + assert_value_range(&bitfield, 33, 31, false); + + assert_value_range(&bitfield, 32760, 8, false); + assert!(!bitfield.get(32767)); + bitfield.set(32767, true); + assert!(bitfield.get(32767)); + assert_value_range(&bitfield, 32760, 7, false); + + // Now for over one fixed bitfield values + bitfield.set(32768, true); + assert_value_range(&bitfield, 32767, 2, true); + assert_value_range(&bitfield, 32769, 9, false); + + bitfield.set(10000000, true); + assert!(bitfield.get(10000000)); + assert_value_range(&bitfield, 9999990, 10, false); + assert_value_range(&bitfield, 10000001, 9, false); + assert_eq!(bitfield.index_of(false, 32767), Some(32769)); + assert_eq!(bitfield.index_of(true, 32769), Some(10000000)); + assert_eq!(bitfield.last_index_of(true, 9999999), Some(32768)); + } + + #[test] + fn bitfield_dynamic_set_range() { + let mut bitfield = get_dynamic_bitfield(); + bitfield.set_range(0, 2, true); + assert_value_range(&bitfield, 0, 2, true); + assert_value_range(&bitfield, 3, 61, false); + + bitfield.set_range(2, 3, true); + assert_value_range(&bitfield, 0, 5, true); + assert_value_range(&bitfield, 5, 59, false); + + bitfield.set_range(1, 3, false); + assert!(bitfield.get(0)); + assert_value_range(&bitfield, 1, 3, false); + assert_value_range(&bitfield, 4, 1, true); + assert_value_range(&bitfield, 5, 59, false); + + bitfield.set_range(30, 30070, true); + assert_value_range(&bitfield, 5, 25, false); + assert_value_range(&bitfield, 30, 100, true); + assert_value_range(&bitfield, 30050, 50, true); + assert_value_range(&bitfield, 31000, 50, false); + + bitfield.set_range(32750, 18, true); + assert_value_range(&bitfield, 32750, 18, true); + + bitfield.set_range(32765, 3, false); + assert_value_range(&bitfield, 32750, 15, true); + assert_value_range(&bitfield, 32765, 3, false); + + // Now for over one 
fixed bitfield values + bitfield.set_range(32765, 15, true); + assert_value_range(&bitfield, 32765, 15, true); + assert_value_range(&bitfield, 32780, 9, false); + bitfield.set_range(32766, 3, false); + assert_value_range(&bitfield, 32766, 3, false); + + bitfield.set_range(10000000, 50, true); + assert_value_range(&bitfield, 9999990, 9, false); + assert_value_range(&bitfield, 10000050, 9, false); + assert_eq!(bitfield.index_of(true, 32780), Some(10000000)); + bitfield.set_range(0, 32780, false); + // Manufacture empty pages to test sorting + bitfield.set(900000, true); + bitfield.set(900000, false); + bitfield.set(300000, true); + bitfield.set(300000, false); + bitfield.set(200000, true); + bitfield.set(200000, false); + bitfield.set(500000, true); + bitfield.set(500000, false); + bitfield.set(100000, true); + bitfield.set(100000, false); + bitfield.set(700000, true); + bitfield.set(700000, false); + assert_eq!(bitfield.index_of(true, 0), Some(10000000)); + assert_eq!(bitfield.last_index_of(true, 9999999), None); + + bitfield.set_range(10000010, 10, false); + assert_value_range(&bitfield, 10000000, 10, true); + assert_value_range(&bitfield, 10000010, 10, false); + assert_value_range(&bitfield, 10000020, 30, true); + assert_value_range(&bitfield, 10000050, 9, false); + } +} diff --git a/vendor/hypercore/src/bitfield/fixed.rs b/vendor/hypercore/src/bitfield/fixed.rs new file mode 100644 index 00000000..57ad3b41 --- /dev/null +++ b/vendor/hypercore/src/bitfield/fixed.rs @@ -0,0 +1,228 @@ +pub(crate) const FIXED_BITFIELD_LENGTH: usize = 1024; +pub(crate) const FIXED_BITFIELD_BYTES_LENGTH: usize = FIXED_BITFIELD_LENGTH * 4; +pub(crate) const FIXED_BITFIELD_BITS_LENGTH: usize = FIXED_BITFIELD_BYTES_LENGTH * 8; +// u32 has 4 bytes and a byte has 8 bits +const FIXED_BITFIELD_BITS_PER_ELEM: u32 = 4 * 8; + +use std::convert::TryInto; + +/// Fixed size bitfield +/// see: +/// https://github.com/holepunchto/bits-to-bytes/blob/main/index.js +/// for implementations. 
+/// TODO: This has been split into segments on the Javascript side "for improved disk performance": +/// https://github.com/hypercore-protocol/hypercore/commit/6392021b11d53041a446e9021c7d79350a052d3d +#[derive(Debug)] +pub(crate) struct FixedBitfield { + pub(crate) dirty: bool, + bitfield: [u32; FIXED_BITFIELD_LENGTH], +} + +impl FixedBitfield { + pub(crate) fn new() -> Self { + Self { + dirty: false, + bitfield: [0; FIXED_BITFIELD_LENGTH], + } + } + + pub(crate) fn from_data(data_index: usize, data: &[u8]) -> Self { + let mut bitfield = [0; FIXED_BITFIELD_LENGTH]; + if data.len() >= data_index + 4 { + let mut i = data_index; + let limit = std::cmp::min(data_index + FIXED_BITFIELD_BYTES_LENGTH, data.len()) - 4; + while i <= limit { + let value: u32 = (data[i] as u32) + | ((data[i + 1] as u32) << 8) + | ((data[i + 2] as u32) << 16) + | ((data[i + 3] as u32) << 24); + bitfield[i / 4] = value; + i += 4; + } + } + Self { + dirty: false, + bitfield, + } + } + + pub(crate) fn to_bytes(&self) -> Box<[u8]> { + let mut data: [u8; FIXED_BITFIELD_BYTES_LENGTH] = [0; FIXED_BITFIELD_BYTES_LENGTH]; + let mut i = 0; + for elem in self.bitfield { + let bytes = &elem.to_le_bytes(); + data[i] = bytes[0]; + data[i + 1] = bytes[1]; + data[i + 2] = bytes[2]; + data[i + 3] = bytes[3]; + i += 4; + } + data.into() + } + + pub(crate) fn get(&self, index: u32) -> bool { + let n = FIXED_BITFIELD_BITS_PER_ELEM; + let offset = index & (n - 1); + let i: usize = ((index - offset) / n) + .try_into() + .expect("Could not fit 64 bit integer to usize on this architecture"); + self.bitfield[i] & (1 << offset) != 0 + } + + pub(crate) fn set(&mut self, index: u32, value: bool) -> bool { + let n = FIXED_BITFIELD_BITS_PER_ELEM; + let offset = index & (n - 1); + let i: usize = ((index - offset) / n) + .try_into() + .expect("Could not fit 64 bit integer to usize on this architecture"); + let mask = 1 << offset; + + if value { + if (self.bitfield[i] & mask) != 0 { + return false; + } + } else if 
(self.bitfield[i] & mask) == 0 { + return false; + } + self.bitfield[i] ^= mask; + true + } + + pub(crate) fn set_range(&mut self, start: u32, length: u32, value: bool) -> bool { + let end: u32 = start + length; + let n = FIXED_BITFIELD_BITS_PER_ELEM; + + let mut remaining: i64 = end as i64 - start as i64; + let mut offset = start & (n - 1); + let mut i: usize = ((start - offset) / n).try_into().unwrap(); + + let mut changed = false; + + while remaining > 0 { + let base: u32 = 2; + let power: u32 = std::cmp::min(remaining, (n - offset).into()) + .try_into() + .unwrap(); + let mask_seed = if power == 32 { + // Go directly to this maximum value as the below + // calculation overflows as 1 is subtracted after + // the power. + u32::MAX + } else { + base.pow(power) - 1 + }; + let mask: u32 = mask_seed << offset; + + if value { + if (self.bitfield[i] & mask) != mask { + self.bitfield[i] |= mask; + changed = true; + } + } else if (self.bitfield[i] & mask) != 0 { + self.bitfield[i] &= !mask; + changed = true; + } + + remaining -= (n - offset) as i64; + offset = 0; + i += 1; + } + + changed + } + + /// Finds the first index of the value at or after the given position. Returns None if not found. + pub(crate) fn index_of(&self, value: bool, position: u32) -> Option<u32> { + (position..FIXED_BITFIELD_BITS_LENGTH as u32).find(|&i| self.get(i) == value) + } + + /// Finds the last index of the value at or before the given position. Returns None if not found. 
+ pub(crate) fn last_index_of(&self, value: bool, position: u32) -> Option<u32> { + (0..position + 1).rev().find(|&i| self.get(i) == value) + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn assert_value_range(bitfield: &FixedBitfield, start: u32, length: u32, value: bool) { + for i in start..start + length { + assert_eq!(bitfield.get(i), value); + } + } + + #[test] + fn bitfield_fixed_get_and_set() { + let mut bitfield = FixedBitfield::new(); + assert_value_range(&bitfield, 0, 9, false); + assert_eq!(bitfield.index_of(true, 0), None); + assert_eq!(bitfield.index_of(false, 0), Some(0)); + assert_eq!(bitfield.last_index_of(true, 9), None); + assert_eq!(bitfield.last_index_of(false, 9), Some(9)); + + bitfield.set(0, true); + assert!(bitfield.get(0)); + assert_eq!(bitfield.index_of(true, 0), Some(0)); + assert_eq!(bitfield.index_of(false, 0), Some(1)); + assert_eq!(bitfield.last_index_of(true, 9), Some(0)); + assert_eq!(bitfield.last_index_of(false, 9), Some(9)); + assert_eq!(bitfield.last_index_of(false, 0), None); + + assert_value_range(&bitfield, 1, 63, false); + bitfield.set(31, true); + assert!(bitfield.get(31)); + assert_eq!(bitfield.index_of(true, 1), Some(31)); + assert_eq!(bitfield.index_of(false, 31), Some(32)); + + assert_value_range(&bitfield, 32, 32, false); + assert!(!bitfield.get(32)); + bitfield.set(32, true); + assert!(bitfield.get(32)); + assert_value_range(&bitfield, 33, 31, false); + + assert_value_range(&bitfield, 32760, 8, false); + assert!(!bitfield.get(32767)); + bitfield.set(32767, true); + assert!(bitfield.get(32767)); + assert_value_range(&bitfield, 32760, 7, false); + assert_eq!(bitfield.index_of(true, 33), Some(32767)); + assert_eq!(bitfield.last_index_of(true, 9), Some(0)); + assert_eq!(bitfield.last_index_of(true, 32766), Some(32)); + } + + #[test] + fn bitfield_fixed_set_range() { + let mut bitfield = FixedBitfield::new(); + bitfield.set_range(0, 2, true); + assert_value_range(&bitfield, 0, 2, true); + assert_value_range(&bitfield, 
3, 61, false); + + bitfield.set_range(2, 3, true); + assert_value_range(&bitfield, 0, 5, true); + assert_value_range(&bitfield, 5, 59, false); + + bitfield.set_range(1, 3, false); + assert!(bitfield.get(0)); + assert_value_range(&bitfield, 1, 3, false); + assert_value_range(&bitfield, 4, 1, true); + assert_value_range(&bitfield, 5, 59, false); + + bitfield.set_range(30, 30070, true); + assert_value_range(&bitfield, 5, 25, false); + assert_value_range(&bitfield, 30, 100, true); + assert_value_range(&bitfield, 30050, 50, true); + assert_value_range(&bitfield, 31000, 50, false); + assert_eq!(bitfield.index_of(true, 20), Some(30)); + assert_eq!(bitfield.index_of(false, 30), Some(30100)); + assert_eq!(bitfield.last_index_of(true, 32000), Some(30099)); + assert_eq!(bitfield.last_index_of(false, 30099), Some(29)); + + bitfield.set_range(32750, 18, true); + assert_value_range(&bitfield, 32750, 18, true); + + bitfield.set_range(32765, 3, false); + assert_value_range(&bitfield, 32750, 15, true); + assert_value_range(&bitfield, 32765, 3, false); + } +} diff --git a/vendor/hypercore/src/bitfield/mod.rs b/vendor/hypercore/src/bitfield/mod.rs new file mode 100644 index 00000000..9daa246c --- /dev/null +++ b/vendor/hypercore/src/bitfield/mod.rs @@ -0,0 +1,4 @@ +mod dynamic; +mod fixed; + +pub(crate) use dynamic::DynamicBitfield as Bitfield; diff --git a/vendor/hypercore/src/builder.rs b/vendor/hypercore/src/builder.rs new file mode 100644 index 00000000..4e18dad2 --- /dev/null +++ b/vendor/hypercore/src/builder.rs @@ -0,0 +1,100 @@ +use random_access_storage::RandomAccess; +use std::fmt::Debug; +#[cfg(feature = "cache")] +use std::time::Duration; +use tracing::instrument; + +#[cfg(feature = "cache")] +use crate::common::cache::CacheOptions; +use crate::{core::HypercoreOptions, Hypercore, HypercoreError, PartialKeypair, Storage}; + +/// Build CacheOptions. 
+#[cfg(feature = "cache")] +#[derive(Debug)] +pub struct CacheOptionsBuilder(CacheOptions); + +#[cfg(feature = "cache")] +impl Default for CacheOptionsBuilder { + fn default() -> Self { + Self::new() + } +} + +#[cfg(feature = "cache")] +impl CacheOptionsBuilder { + /// Create a CacheOptions builder with default options + pub fn new() -> Self { + Self(CacheOptions::new()) + } + + /// Set cache time to live. + pub fn time_to_live(mut self, time_to_live: Duration) -> Self { + self.0.time_to_live = Some(time_to_live); + self + } + + /// Set cache time to idle. + pub fn time_to_idle(mut self, time_to_idle: Duration) -> Self { + self.0.time_to_idle = Some(time_to_idle); + self + } + + /// Set cache max capacity in bytes. + pub fn max_capacity(mut self, max_capacity: u64) -> Self { + self.0.max_capacity = Some(max_capacity); + self + } + + /// Build new cache options. + pub(crate) fn build(self) -> CacheOptions { + self.0 + } +} + +/// Build a Hypercore instance with options. +#[derive(Debug)] +pub struct HypercoreBuilder<T> +where + T: RandomAccess + Debug + Send, +{ + storage: Storage<T>, + options: HypercoreOptions, +} + +impl<T> HypercoreBuilder<T> +where + T: RandomAccess + Debug + Send, +{ + /// Create a hypercore builder with a given storage + pub fn new(storage: Storage<T>) -> Self { + Self { + storage, + options: HypercoreOptions::new(), + } + } + + /// Set key pair. + pub fn key_pair(mut self, key_pair: PartialKeypair) -> Self { + self.options.key_pair = Some(key_pair); + self + } + + /// Set open. + pub fn open(mut self, open: bool) -> Self { + self.options.open = open; + self + } + + /// Set node cache options. + #[cfg(feature = "cache")] + pub fn node_cache_options(mut self, builder: CacheOptionsBuilder) -> Self { + self.options.node_cache_options = Some(builder.build()); + self + } + + /// Build a new Hypercore. 
+ #[instrument(err, skip_all)] + pub async fn build(self) -> Result<Hypercore<T>, HypercoreError> { + Hypercore::new(self.storage, self.options).await + } +} diff --git a/vendor/hypercore/src/common/cache.rs new file mode 100644 index 00000000..fc6a4961 --- /dev/null +++ b/vendor/hypercore/src/common/cache.rs @@ -0,0 +1,58 @@ +use moka::sync::Cache; +use std::time::Duration; + +use crate::Node; + +// Default to 1 year of cache +const DEFAULT_CACHE_TTL_SEC: u64 = 31556952; +const DEFAULT_CACHE_TTI_SEC: u64 = 31556952; +// Default to 100kb of node cache +const DEFAULT_CACHE_MAX_SIZE: u64 = 100000; +const NODE_WEIGHT: u32 = + // Byte size of a Node based on the fields. + 3 * 8 + 32 + 4 + + // Then 8 for key and guesstimate 8 bytes of overhead. + 8 + 8; + +#[derive(Debug, Clone)] +pub(crate) struct CacheOptions { + pub(crate) time_to_live: Option<Duration>, + pub(crate) time_to_idle: Option<Duration>, + pub(crate) max_capacity: Option<u64>, +} + +impl CacheOptions { + pub(crate) fn new() -> Self { + Self { + time_to_live: None, + time_to_idle: None, + max_capacity: None, + } + } + + pub(crate) fn to_node_cache(&self, initial_nodes: Vec<Node>) -> Cache<u64, Node> { + let cache = if self.time_to_live.is_some() || self.time_to_idle.is_some() { + Cache::builder() + .time_to_live( + self.time_to_live + .unwrap_or_else(|| Duration::from_secs(DEFAULT_CACHE_TTL_SEC)), + ) + .time_to_idle( + self.time_to_idle + .unwrap_or_else(|| Duration::from_secs(DEFAULT_CACHE_TTI_SEC)), + ) + .max_capacity(self.max_capacity.unwrap_or(DEFAULT_CACHE_MAX_SIZE)) + .weigher(|_, _| NODE_WEIGHT) + .build() + } else { + Cache::builder() + .max_capacity(self.max_capacity.unwrap_or(DEFAULT_CACHE_MAX_SIZE)) + .weigher(|_, _| NODE_WEIGHT) + .build() + }; + for node in initial_nodes { + cache.insert(node.index, node); + } + cache + } +} diff --git a/vendor/hypercore/src/common/error.rs new file mode 100644 index 00000000..89ec0b37 --- /dev/null +++ 
b/vendor/hypercore/src/common/error.rs @@ -0,0 +1,78 @@ +use compact_encoding::EncodingError; +use thiserror::Error; + +use crate::Store; + +/// Common error type for the hypercore interface +#[derive(Error, Debug)] +pub enum HypercoreError { + /// Bad argument + #[error("Bad argument. {context}")] + BadArgument { + /// Context for the error + context: String, + }, + /// Not writable + #[error("Hypercore not writable")] + NotWritable, + /// Invalid signature + #[error("Given signature was invalid. {context}")] + InvalidSignature { + /// Context for the error + context: String, + }, + /// Invalid checksum + #[error("Invalid checksum. {context}")] + InvalidChecksum { + /// Context for the error + context: String, + }, + /// Empty storage + #[error("Empty storage: {store}.")] + EmptyStorage { + /// Store that was found empty + store: Store, + }, + /// Corrupt storage + #[error("Corrupt storage: {store}.{}", + .context.as_ref().map_or_else(String::new, |ctx| format!(" Context: {ctx}.")))] + CorruptStorage { + /// Store that was corrupt + store: Store, + /// Context for the error + context: Option<String>, + }, + /// Invalid operation + #[error("Invalid operation. {context}")]
+ InvalidOperation { + /// Context for the error + context: String, + }, + /// Unexpected IO error occurred + #[error("Unrecoverable input/output error occurred.{}", + .context.as_ref().map_or_else(String::new, |ctx| format!(" {ctx}.")))] + IO { + /// Context for the error + context: Option<String>, + /// Original source error + #[source] + source: std::io::Error, + }, +} + +impl From<std::io::Error> for HypercoreError { + fn from(err: std::io::Error) -> Self { + Self::IO { + context: None, + source: err, + } + } +} + +impl From<EncodingError> for HypercoreError { + fn from(err: EncodingError) -> Self { + Self::InvalidOperation { + context: format!("Encoding failed: {err}"), + } + } +} diff --git a/vendor/hypercore/src/common/mod.rs new file mode 100644 index 00000000..f5fb6baf --- /dev/null +++ b/vendor/hypercore/src/common/mod.rs @@ -0,0 +1,23 @@ +#[cfg(feature = "cache")] +pub(crate) mod cache; +mod error; +mod node; +mod peer; +mod store; + +pub use self::error::HypercoreError; +pub use self::node::Node; +pub(crate) use self::node::NodeByteRange; +pub(crate) use self::peer::ValuelessProof; +pub use self::peer::{ + DataBlock, DataHash, DataSeek, DataUpgrade, Proof, RequestBlock, RequestSeek, RequestUpgrade, +}; +pub use self::store::Store; +pub(crate) use self::store::{StoreInfo, StoreInfoInstruction, StoreInfoType}; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct BitfieldUpdate { + pub(crate) drop: bool, + pub(crate) start: u64, + pub(crate) length: u64, +} diff --git a/vendor/hypercore/src/common/node.rs new file mode 100644 index 00000000..7e339d37 --- /dev/null +++ b/vendor/hypercore/src/common/node.rs @@ -0,0 +1,148 @@ +use merkle_tree_stream::Node as NodeTrait; +use merkle_tree_stream::{NodeKind, NodeParts}; +use pretty_hash::fmt as pretty_fmt; +use std::cmp::Ordering; +use std::convert::AsRef; +use std::fmt::{self, Display}; + +use crate::crypto::Hash; + +/// Node byte range +#[derive(Debug, Clone, 
PartialEq, Eq)] +pub(crate) struct NodeByteRange { + pub(crate) index: u64, + pub(crate) length: u64, +} + +/// Nodes that are persisted to disk. +// TODO: replace `hash: Vec<u8>` with `hash: Hash`. This requires patching / +// rewriting the Blake2b crate to support `.from_bytes()` to serialize from +// disk. +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct Node { + pub(crate) index: u64, + pub(crate) hash: Vec<u8>, + pub(crate) length: u64, + pub(crate) parent: u64, + pub(crate) data: Option<Vec<u8>>, + pub(crate) blank: bool, +} + +impl Node { + /// Create a new instance. + // TODO: ensure sizes are correct. + pub fn new(index: u64, hash: Vec<u8>, length: u64) -> Self { + let mut blank = true; + for byte in &hash { + if *byte != 0 { + blank = false; + break; + } + } + Self { + index, + hash, + length, + parent: flat_tree::parent(index), + data: Some(Vec::with_capacity(0)), + blank, + } + } + + /// Creates a new blank node + pub fn new_blank(index: u64) -> Self { + Self { + index, + hash: vec![0; 32], + length: 0, + parent: 0, + data: None, + blank: true, + } + } +} + +impl NodeTrait for Node { + #[inline] + fn index(&self) -> u64 { + self.index + } + + #[inline] + fn hash(&self) -> &[u8] { + &self.hash + } + + #[inline] + fn len(&self) -> u64 { + self.length + } + + #[inline] + fn is_empty(&self) -> bool { + self.length == 0 + } + + #[inline] + fn parent(&self) -> u64 { + self.parent + } +} + +impl AsRef<Node> for Node { + #[inline] + fn as_ref(&self) -> &Self { + self + } +} + +impl Display for Node { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + write!( + f, + "Node {{ index: {}, hash: {}, length: {} }}", + self.index, + pretty_fmt(&self.hash).unwrap(), + self.length + ) + } +} + +impl PartialOrd for Node { + fn partial_cmp(&self, other: &Self) -> Option<Ordering> { + Some(self.index.cmp(&other.index)) + } +} + +impl Ord for Node { + fn cmp(&self, other: &Self) -> Ordering { + self.index.cmp(&other.index) + } +} + +impl From<NodeParts<Hash>> for Node { + fn from(parts: NodeParts<Hash>) -> Self { 
let partial = parts.node(); + let data = match partial.data() { + NodeKind::Leaf(data) => Some(data.clone()), + NodeKind::Parent => None, + }; + let hash: Vec<u8> = parts.hash().as_bytes().into(); + let mut blank = true; + for byte in &hash { + if *byte != 0 { + blank = false; + break; + } + } + + Node { + index: partial.index(), + parent: partial.parent, + length: partial.len(), + hash, + data, + blank, + } + } +} diff --git a/vendor/hypercore/src/common/peer.rs new file mode 100644 index 00000000..c71b9818 --- /dev/null +++ b/vendor/hypercore/src/common/peer.rs @@ -0,0 +1,117 @@ +//! Types needed for passing information with peers. +//! hypercore-protocol-rs uses these types and wraps them +//! into wire messages. +use crate::Node; + +#[derive(Debug, Clone, PartialEq)] +/// Request of a DataBlock or DataHash from peer +pub struct RequestBlock { + /// Hypercore index + pub index: u64, + /// TODO: document + pub nodes: u64, +} + +#[derive(Debug, Clone, PartialEq)] +/// Request of a DataSeek from peer +pub struct RequestSeek { + /// TODO: document + pub bytes: u64, +} + +#[derive(Debug, Clone, PartialEq)] +/// Request of a DataUpgrade from peer +pub struct RequestUpgrade { + /// Hypercore start index + pub start: u64, + /// Length of elements + pub length: u64, +} + +#[derive(Debug, Clone, PartialEq)] +/// Proof generated from corresponding requests +pub struct Proof { + /// Fork + pub fork: u64, + /// Data block. + pub block: Option<DataBlock>, + /// Data hash + pub hash: Option<DataHash>, + /// Data seek + pub seek: Option<DataSeek>, + /// Data upgrade + pub upgrade: Option<DataUpgrade>, +} + +#[derive(Debug, Clone, PartialEq)] +/// Valueless proof generated from corresponding requests +pub(crate) struct ValuelessProof { + pub(crate) fork: u64, + /// Data block. NB: The ValuelessProof struct uses the Hash type because + /// the stored binary value is processed externally to the proof. 
+ pub(crate) block: Option<DataHash>, + pub(crate) hash: Option<DataHash>, + pub(crate) seek: Option<DataSeek>, + pub(crate) upgrade: Option<DataUpgrade>, +} + +impl ValuelessProof { + pub(crate) fn into_proof(mut self, block_value: Option<Vec<u8>>) -> Proof { + let block = self.block.take().map(|block| DataBlock { + index: block.index, + nodes: block.nodes, + value: block_value.expect("Data block needs to be given"), + }); + Proof { + fork: self.fork, + block, + hash: self.hash.take(), + seek: self.seek.take(), + upgrade: self.upgrade.take(), + } + } +} + +#[derive(Debug, Clone, PartialEq)] +/// Block of data to peer +pub struct DataBlock { + /// Hypercore index + pub index: u64, + /// Data block value in bytes + pub value: Vec<u8>, + /// TODO: document + pub nodes: Vec<Node>, +} + +#[derive(Debug, Clone, PartialEq)] +/// Data hash to peer +pub struct DataHash { + /// Hypercore index + pub index: u64, + /// TODO: document + pub nodes: Vec<Node>, +} + +#[derive(Debug, Clone, PartialEq)] +/// TODO: Document +pub struct DataSeek { + /// TODO: Document + pub bytes: u64, + /// TODO: Document + pub nodes: Vec<Node>, +} + +#[derive(Debug, Clone, PartialEq)] +/// TODO: Document +pub struct DataUpgrade { + /// TODO: Document + pub start: u64, + /// TODO: Document + pub length: u64, + /// TODO: Document + pub nodes: Vec<Node>, + /// TODO: Document + pub additional_nodes: Vec<Node>, + /// TODO: Document + pub signature: Vec<u8>, +} diff --git a/vendor/hypercore/src/common/store.rs new file mode 100644 index 00000000..357ebc03 --- /dev/null +++ b/vendor/hypercore/src/common/store.rs @@ -0,0 +1,155 @@ +/// The types of stores that can be created. 
+#[derive(Debug, Clone, PartialEq)] +pub enum Store { + /// Tree + Tree, + /// Data (block store) + Data, + /// Bitfield + Bitfield, + /// Oplog + Oplog, +} + +impl std::fmt::Display for Store { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + Store::Tree => write!(f, "tree"), + Store::Data => write!(f, "data"), + Store::Bitfield => write!(f, "bitfield"), + Store::Oplog => write!(f, "oplog"), + } + } +} + +/// Information type about a store. +#[derive(Debug, PartialEq)] +pub(crate) enum StoreInfoType { + /// Read/write content of the store + Content, + /// Size in bytes of the store. When flushed, truncates to the given index. `data` is `None`. + Size, +} + +/// Piece of information about a store. Useful for indicating changes that should be made to random +/// access storages or information read from them. +#[derive(Debug)] +pub(crate) struct StoreInfo { + pub(crate) store: Store, + pub(crate) info_type: StoreInfoType, + pub(crate) index: u64, + pub(crate) length: Option<u64>, + pub(crate) data: Option<Box<[u8]>>, + /// When reading, indicates missing value (can be true only if allow_miss is given as instruction). + /// When writing, indicates that the value should be dropped. 
+ pub(crate) miss: bool, +} + +impl StoreInfo { + pub(crate) fn new_content(store: Store, index: u64, data: &[u8]) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index, + length: Some(data.len() as u64), + data: Some(data.into()), + miss: false, + } + } + + pub(crate) fn new_content_miss(store: Store, index: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index, + length: None, + data: None, + miss: true, + } + } + + pub(crate) fn new_delete(store: Store, index: u64, length: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index, + length: Some(length), + data: None, + miss: true, + } + } + + pub(crate) fn new_truncate(store: Store, index: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Size, + index, + length: None, + data: None, + miss: true, + } + } + + pub(crate) fn new_size(store: Store, index: u64, length: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Size, + index, + length: Some(length), + data: None, + miss: false, + } + } +} + +/// Represents an instruction to obtain information about a store. 
+#[derive(Debug)] +pub(crate) struct StoreInfoInstruction { + pub(crate) store: Store, + pub(crate) info_type: StoreInfoType, + pub(crate) index: u64, + pub(crate) length: Option<u64>, + pub(crate) allow_miss: bool, +} + +impl StoreInfoInstruction { + pub(crate) fn new_content(store: Store, index: u64, length: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index, + length: Some(length), + allow_miss: false, + } + } + + pub(crate) fn new_content_allow_miss(store: Store, index: u64, length: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index, + length: Some(length), + allow_miss: true, + } + } + + pub(crate) fn new_all_content(store: Store) -> Self { + Self { + store, + info_type: StoreInfoType::Content, + index: 0, + length: None, + allow_miss: false, + } + } + + pub(crate) fn new_size(store: Store, index: u64) -> Self { + Self { + store, + info_type: StoreInfoType::Size, + index, + length: None, + allow_miss: false, + } + } +} diff --git a/vendor/hypercore/src/core.rs new file mode 100644 index 00000000..fe49e9a2 --- /dev/null +++ b/vendor/hypercore/src/core.rs @@ -0,0 +1,1136 @@ +//! Hypercore's main abstraction. Exposes an append-only, secure log structure. 
+use ed25519_dalek::Signature; +use futures::future::Either; +use random_access_storage::RandomAccess; +use std::convert::TryFrom; +use std::fmt::Debug; +use tracing::instrument; + +#[cfg(feature = "cache")] +use crate::common::cache::CacheOptions; +use crate::{ + bitfield::Bitfield, + common::{BitfieldUpdate, HypercoreError, NodeByteRange, Proof, StoreInfo, ValuelessProof}, + crypto::{generate_signing_key, PartialKeypair}, + data::BlockStore, + oplog::{Header, Oplog, MAX_OPLOG_ENTRIES_BYTE_SIZE}, + storage::Storage, + tree::{MerkleTree, MerkleTreeChangeset}, + RequestBlock, RequestSeek, RequestUpgrade, +}; + +#[derive(Debug)] +pub(crate) struct HypercoreOptions { + pub(crate) key_pair: Option<PartialKeypair>, + pub(crate) open: bool, + #[cfg(feature = "cache")] + pub(crate) node_cache_options: Option<CacheOptions>, +} + +impl HypercoreOptions { + pub(crate) fn new() -> Self { + Self { + key_pair: None, + open: false, + #[cfg(feature = "cache")] + node_cache_options: None, + } + } +} + +/// Hypercore is an append-only log structure. +#[derive(Debug)] +pub struct Hypercore<T> +where + T: RandomAccess + Debug, +{ + pub(crate) key_pair: PartialKeypair, + pub(crate) storage: Storage<T>, + pub(crate) oplog: Oplog, + pub(crate) tree: MerkleTree, + pub(crate) block_store: BlockStore, + pub(crate) bitfield: Bitfield, + skip_flush_count: u8, // autoFlush in Javascript + header: Header, +} + +/// Response from append, matches that of the Javascript result +#[derive(Debug)] +pub struct AppendOutcome { + /// Length of the hypercore after append + pub length: u64, + /// Byte length of the hypercore after append + pub byte_length: u64, +} + +/// Info about the hypercore +#[derive(Debug)] +pub struct Info { + /// Length of the hypercore + pub length: u64, + /// Byte length of the hypercore + pub byte_length: u64, + /// Continuous length of entries in the hypercore with data + /// starting from index 0 + pub contiguous_length: u64, + /// Fork index. 0 if hypercore not forked. 
+ pub fork: u64, + /// True if hypercore is writeable, false if read-only + pub writeable: bool, +} + +impl<T> Hypercore<T> +where + T: RandomAccess + Debug + Send, +{ + /// Creates/opens new hypercore using given storage and options + pub(crate) async fn new( + mut storage: Storage<T>, + mut options: HypercoreOptions, + ) -> Result<Hypercore<T>, HypercoreError> { + let key_pair: Option<PartialKeypair> = if options.open { + if options.key_pair.is_some() { + return Err(HypercoreError::BadArgument { + context: "Key pair can not be used when building an openable hypercore" + .to_string(), + }); + } + None + } else { + Some(options.key_pair.take().unwrap_or_else(|| { + let signing_key = generate_signing_key(); + PartialKeypair { + public: signing_key.verifying_key(), + secret: Some(signing_key), + } + })) + }; + + // Open/create oplog + let mut oplog_open_outcome = match Oplog::open(&key_pair, None)? { + Either::Right(value) => value, + Either::Left(instruction) => { + let info = storage.read_info(instruction).await?; + match Oplog::open(&key_pair, Some(info))? { + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: "Could not open oplog".to_string(), + }); + } + } + } + }; + storage + .flush_infos(&oplog_open_outcome.infos_to_flush) + .await?; + + // Open/create tree + let mut tree = match MerkleTree::open( + &oplog_open_outcome.header.tree, + None, + #[cfg(feature = "cache")] + &options.node_cache_options, + )? { + Either::Right(value) => value, + Either::Left(instructions) => { + let infos = storage.read_infos(&instructions).await?; + match MerkleTree::open( + &oplog_open_outcome.header.tree, + Some(&infos), + #[cfg(feature = "cache")] + &options.node_cache_options, + )? 
{ + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: "Could not open tree".to_string(), + }); + } + } + } + }; + + // Create block store instance + let block_store = BlockStore::default(); + + // Open bitfield + let mut bitfield = match Bitfield::open(None) { + Either::Right(value) => value, + Either::Left(instruction) => { + let info = storage.read_info(instruction).await?; + match Bitfield::open(Some(info)) { + Either::Right(value) => value, + Either::Left(instruction) => { + let info = storage.read_info(instruction).await?; + match Bitfield::open(Some(info)) { + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: "Could not open bitfield".to_string(), + }); + } + } + } + } + } + }; + + // Process entries stored only to the oplog and not yet flushed into bitfield or tree + if let Some(entries) = oplog_open_outcome.entries { + for entry in entries.iter() { + for node in &entry.tree_nodes { + tree.add_node(node.clone()); + } + + if let Some(bitfield_update) = &entry.bitfield { + bitfield.update(bitfield_update); + update_contiguous_length( + &mut oplog_open_outcome.header, + &bitfield, + bitfield_update, + ); + } + if let Some(tree_upgrade) = &entry.tree_upgrade { + // TODO: Generalize Either response stack + let mut changeset = + match tree.truncate(tree_upgrade.length, tree_upgrade.fork, None)? { + Either::Right(value) => value, + Either::Left(instructions) => { + let infos = storage.read_infos(&instructions).await?; + match tree.truncate( + tree_upgrade.length, + tree_upgrade.fork, + Some(&infos), + )? 
{ + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: format!( + "Could not truncate tree to length {}", + tree_upgrade.length + ), + }); + } + } + } + }; + changeset.ancestors = tree_upgrade.ancestors; + changeset.hash = Some(changeset.hash()); + changeset.signature = + Some(Signature::try_from(&*tree_upgrade.signature).map_err(|_| { + HypercoreError::InvalidSignature { + context: "Could not parse changeset signature".to_string(), + } + })?); + + // Update the header with this changeset to make in-memory value match that + // of the stored value. + oplog_open_outcome.oplog.update_header_with_changeset( + &changeset, + None, + &mut oplog_open_outcome.header, + )?; + + // TODO: Skip reorg hints for now, seems to only have to do with replication + // addReorgHint(header.hints.reorgs, tree, batch) + + // Commit changeset to in-memory tree + tree.commit(changeset)?; + } + } + } + + let oplog = oplog_open_outcome.oplog; + let header = oplog_open_outcome.header; + let key_pair = header.key_pair.clone(); + + Ok(Hypercore { + key_pair, + storage, + oplog, + tree, + block_store, + bitfield, + header, + skip_flush_count: 0, + }) + } + + /// Gets basic info about the Hypercore + pub fn info(&self) -> Info { + Info { + length: self.tree.length, + byte_length: self.tree.byte_length, + contiguous_length: self.header.hints.contiguous_length, + fork: self.tree.fork, + writeable: self.key_pair.secret.is_some(), + } + } + + /// Appends a data slice to the hypercore. + #[instrument(err, skip_all, fields(data_len = data.len()))] + pub async fn append(&mut self, data: &[u8]) -> Result<AppendOutcome, HypercoreError> { + self.append_batch(&[data]).await + } + + /// Appends a given batch of data slices to the hypercore. 
+    #[instrument(err, skip_all, fields(batch_len = batch.as_ref().len()))]
+    pub async fn append_batch<A: AsRef<[u8]>, B: AsRef<[A]>>(
+        &mut self,
+        batch: B,
+    ) -> Result<AppendOutcome, HypercoreError> {
+        let secret_key = match &self.key_pair.secret {
+            Some(key) => key,
+            None => return Err(HypercoreError::NotWritable),
+        };
+
+        if !batch.as_ref().is_empty() {
+            // Create a changeset for the tree
+            let mut changeset = self.tree.changeset();
+            let mut batch_length: usize = 0;
+            for data in batch.as_ref().iter() {
+                batch_length += changeset.append(data.as_ref());
+            }
+            changeset.hash_and_sign(secret_key);
+
+            // Write the received data to the block store
+            let info =
+                self.block_store
+                    .append_batch(batch.as_ref(), batch_length, self.tree.byte_length);
+            self.storage.flush_info(info).await?;
+
+            // Append the changeset to the Oplog
+            let bitfield_update = BitfieldUpdate {
+                drop: false,
+                start: changeset.ancestors,
+                length: changeset.batch_length,
+            };
+            let outcome = self.oplog.append_changeset(
+                &changeset,
+                Some(bitfield_update.clone()),
+                false,
+                &self.header,
+            )?;
+            self.storage.flush_infos(&outcome.infos_to_flush).await?;
+            self.header = outcome.header;
+
+            // Write to bitfield
+            self.bitfield.update(&bitfield_update);
+
+            // Contiguous length is known only now
+            update_contiguous_length(&mut self.header, &self.bitfield, &bitfield_update);
+
+            // Commit changeset to in-memory tree
+            self.tree.commit(changeset)?;
+
+            // Now ready to flush
+            if self.should_flush_bitfield_and_tree_and_oplog() {
+                self.flush_bitfield_and_tree_and_oplog(false).await?;
+            }
+        }
+
+        // Return the new value
+        Ok(AppendOutcome {
+            length: self.tree.length,
+            byte_length: self.tree.byte_length,
+        })
+    }
+
+    /// Read value at given index, if any.
+ #[instrument(err, skip(self))] + pub async fn get(&mut self, index: u64) -> Result>, HypercoreError> { + if !self.bitfield.get(index) { + return Ok(None); + } + + let byte_range = self.byte_range(index, None).await?; + + // TODO: Generalize Either response stack + let data = match self.block_store.read(&byte_range, None) { + Either::Right(value) => value, + Either::Left(instruction) => { + let info = self.storage.read_info(instruction).await?; + match self.block_store.read(&byte_range, Some(info)) { + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: "Could not read block storage range".to_string(), + }); + } + } + } + }; + + Ok(Some(data.to_vec())) + } + + /// Clear data for entries between start and end (exclusive) indexes. + #[instrument(err, skip(self))] + pub async fn clear(&mut self, start: u64, end: u64) -> Result<(), HypercoreError> { + if start >= end { + // NB: This is what javascript does, so we mimic that here + return Ok(()); + } + // Write to oplog + let infos_to_flush = self.oplog.clear(start, end)?; + self.storage.flush_infos(&infos_to_flush).await?; + + // Set bitfield + self.bitfield.set_range(start, end - start, false); + + // Set contiguous length + if start < self.header.hints.contiguous_length { + self.header.hints.contiguous_length = start; + } + + // Find the biggest hole that can be punched into the data + let start = if let Some(index) = self.bitfield.last_index_of(true, start) { + index + 1 + } else { + 0 + }; + let end = if let Some(index) = self.bitfield.index_of(true, end) { + index + } else { + self.tree.length + }; + + // Find byte offset for first value + let mut infos: Vec = Vec::new(); + let clear_offset = match self.tree.byte_offset(start, None)? { + Either::Right(value) => value, + Either::Left(instructions) => { + let new_infos = self.storage.read_infos_to_vec(&instructions).await?; + infos.extend(new_infos); + match self.tree.byte_offset(start, Some(&infos))? 
{ + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: format!("Could not read offset for index {start} from tree"), + }); + } + } + } + }; + + // Find byte range for last value + let last_byte_range = self.byte_range(end - 1, Some(&infos)).await?; + + let clear_length = (last_byte_range.index + last_byte_range.length) - clear_offset; + + // Clear blocks + let info_to_flush = self.block_store.clear(clear_offset, clear_length); + self.storage.flush_info(info_to_flush).await?; + + // Now ready to flush + if self.should_flush_bitfield_and_tree_and_oplog() { + self.flush_bitfield_and_tree_and_oplog(false).await?; + } + + Ok(()) + } + + /// Access the key pair. + pub fn key_pair(&self) -> &PartialKeypair { + &self.key_pair + } + + /// Create a proof for given request + #[instrument(err, skip_all)] + pub async fn create_proof( + &mut self, + block: Option, + hash: Option, + seek: Option, + upgrade: Option, + ) -> Result, HypercoreError> { + let valueless_proof = self + .create_valueless_proof(block, hash, seek, upgrade) + .await?; + let value: Option> = if let Some(block) = valueless_proof.block.as_ref() { + let value = self.get(block.index).await?; + if value.is_none() { + // The data value requested in the proof can not be read, we return None here + // and let the party requesting figure out what to do. + return Ok(None); + } + value + } else { + None + }; + Ok(Some(valueless_proof.into_proof(value))) + } + + /// Verify and apply proof received from peer, returns true if changed, false if not + /// possible to apply. 
+ #[instrument(skip_all)] + pub async fn verify_and_apply_proof(&mut self, proof: &Proof) -> Result { + if proof.fork != self.tree.fork { + return Ok(false); + } + let changeset = self.verify_proof(proof).await?; + if !self.tree.commitable(&changeset) { + return Ok(false); + } + + // In javascript there's _verifyExclusive and _verifyShared based on changeset.upgraded, but + // here we do only one. _verifyShared groups together many subsequent changesets into a single + // oplog push, and then flushes in the end only for the whole group. + let bitfield_update: Option = if let Some(block) = &proof.block.as_ref() { + let byte_offset = + match self + .tree + .byte_offset_in_changeset(block.index, &changeset, None)? + { + Either::Right(value) => value, + Either::Left(instructions) => { + let infos = self.storage.read_infos_to_vec(&instructions).await?; + match self.tree.byte_offset_in_changeset( + block.index, + &changeset, + Some(&infos), + )? { + Either::Right(value) => value, + Either::Left(_) => { + return Err(HypercoreError::InvalidOperation { + context: format!( + "Could not read offset for index {} from tree", + block.index + ), + }); + } + } + } + }; + + // Write the value to the block store + let info_to_flush = self.block_store.put(&block.value, byte_offset); + self.storage.flush_info(info_to_flush).await?; + + // Return a bitfield update for the given value + Some(BitfieldUpdate { + drop: false, + start: block.index, + length: 1, + }) + } else { + // Only from DataBlock can there be changes to the bitfield + None + }; + + // Append the changeset to the Oplog + let outcome = self.oplog.append_changeset( + &changeset, + bitfield_update.clone(), + false, + &self.header, + )?; + self.storage.flush_infos(&outcome.infos_to_flush).await?; + self.header = outcome.header; + + if let Some(bitfield_update) = bitfield_update { + // Write to bitfield + self.bitfield.update(&bitfield_update); + + // Contiguous length is known only now + update_contiguous_length(&mut 
self.header, &self.bitfield, &bitfield_update);
+        }
+
+        // Commit changeset to in-memory tree
+        self.tree.commit(changeset)?;
+
+        // Now ready to flush
+        if self.should_flush_bitfield_and_tree_and_oplog() {
+            self.flush_bitfield_and_tree_and_oplog(false).await?;
+        }
+        Ok(true)
+    }
+
+    /// Used to fill the nodes field of a `RequestBlock` during
+    /// synchronization.
+    #[instrument(err, skip(self))]
+    pub async fn missing_nodes(&mut self, index: u64) -> Result<u64, HypercoreError> {
+        self.missing_nodes_from_merkle_tree_index(index * 2).await
+    }
+
+    /// Get missing nodes using a merkle tree index. Advanced variant of `missing_nodes`
+    /// that allows for special cases of searching directly from the merkle tree.
+    #[instrument(err, skip(self))]
+    pub async fn missing_nodes_from_merkle_tree_index(
+        &mut self,
+        merkle_tree_index: u64,
+    ) -> Result<u64, HypercoreError> {
+        match self.tree.missing_nodes(merkle_tree_index, None)? {
+            Either::Right(value) => Ok(value),
+            Either::Left(instructions) => {
+                let mut instructions = instructions;
+                let mut infos: Vec<StoreInfo> = vec![];
+                loop {
+                    infos.extend(self.storage.read_infos_to_vec(&instructions).await?);
+                    match self.tree.missing_nodes(merkle_tree_index, Some(&infos))? {
+                        Either::Right(value) => {
+                            return Ok(value);
+                        }
+                        Either::Left(new_instructions) => {
+                            instructions = new_instructions;
+                        }
+                    }
+                }
+            }
+        }
+    }
+
+    /// Makes the hypercore read-only by deleting the secret key. Returns true if the
+    /// hypercore was changed, false if the hypercore was already read-only. This is useful
+    /// in scenarios where a hypercore should be made immutable after initial values have
+    /// been stored.
+ #[instrument(err, skip_all)] + pub async fn make_read_only(&mut self) -> Result { + if self.key_pair.secret.is_some() { + self.key_pair.secret = None; + self.header.key_pair.secret = None; + // Need to flush clearing traces to make sure both oplog slots are cleared + self.flush_bitfield_and_tree_and_oplog(true).await?; + Ok(true) + } else { + Ok(false) + } + } + + async fn byte_range( + &mut self, + index: u64, + initial_infos: Option<&[StoreInfo]>, + ) -> Result { + match self.tree.byte_range(index, initial_infos)? { + Either::Right(value) => Ok(value), + Either::Left(instructions) => { + let mut instructions = instructions; + let mut infos: Vec = vec![]; + loop { + infos.extend(self.storage.read_infos_to_vec(&instructions).await?); + match self.tree.byte_range(index, Some(&infos))? { + Either::Right(value) => { + return Ok(value); + } + Either::Left(new_instructions) => { + instructions = new_instructions; + } + } + } + } + } + } + + async fn create_valueless_proof( + &mut self, + block: Option, + hash: Option, + seek: Option, + upgrade: Option, + ) -> Result { + match self.tree.create_valueless_proof( + block.as_ref(), + hash.as_ref(), + seek.as_ref(), + upgrade.as_ref(), + None, + )? { + Either::Right(value) => Ok(value), + Either::Left(instructions) => { + let mut instructions = instructions; + let mut infos: Vec = vec![]; + loop { + infos.extend(self.storage.read_infos_to_vec(&instructions).await?); + match self.tree.create_valueless_proof( + block.as_ref(), + hash.as_ref(), + seek.as_ref(), + upgrade.as_ref(), + Some(&infos), + )? { + Either::Right(value) => { + return Ok(value); + } + Either::Left(new_instructions) => { + instructions = new_instructions; + } + } + } + } + } + } + + /// Verify a proof received from a peer. Returns a changeset that should be + /// applied. + async fn verify_proof(&mut self, proof: &Proof) -> Result { + match self.tree.verify_proof(proof, &self.key_pair.public, None)? 
{ + Either::Right(value) => Ok(value), + Either::Left(instructions) => { + let infos = self.storage.read_infos_to_vec(&instructions).await?; + match self + .tree + .verify_proof(proof, &self.key_pair.public, Some(&infos))? + { + Either::Right(value) => Ok(value), + Either::Left(_) => Err(HypercoreError::InvalidOperation { + context: "Could not verify proof from tree".to_string(), + }), + } + } + } + } + + fn should_flush_bitfield_and_tree_and_oplog(&mut self) -> bool { + if self.skip_flush_count == 0 + || self.oplog.entries_byte_length >= MAX_OPLOG_ENTRIES_BYTE_SIZE + { + self.skip_flush_count = 3; + true + } else { + self.skip_flush_count -= 1; + false + } + } + + async fn flush_bitfield_and_tree_and_oplog( + &mut self, + clear_traces: bool, + ) -> Result<(), HypercoreError> { + let infos = self.bitfield.flush(); + self.storage.flush_infos(&infos).await?; + let infos = self.tree.flush(); + self.storage.flush_infos(&infos).await?; + let infos = self.oplog.flush(&self.header, clear_traces)?; + self.storage.flush_infos(&infos).await?; + Ok(()) + } +} + +fn update_contiguous_length( + header: &mut Header, + bitfield: &Bitfield, + bitfield_update: &BitfieldUpdate, +) { + let end = bitfield_update.start + bitfield_update.length; + let mut c = header.hints.contiguous_length; + if bitfield_update.drop { + if c <= end && c > bitfield_update.start { + c = bitfield_update.start; + } + } else if c <= end && c >= bitfield_update.start { + c = end; + while bitfield.get(c) { + c += 1; + } + } + + if c != header.hints.contiguous_length { + header.hints.contiguous_length = c; + } +} + +#[cfg(test)] +mod tests { + use super::*; + use random_access_memory::RandomAccessMemory; + + #[async_std::test] + async fn core_create_proof_block_only() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + + let proof = hypercore + .create_proof(Some(RequestBlock { index: 4, nodes: 2 }), None, None, None) + .await? 
+ .unwrap(); + let block = proof.block.unwrap(); + assert_eq!(proof.upgrade, None); + assert_eq!(proof.seek, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 2); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(block.nodes[1].index, 13); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 0 }), + None, + None, + Some(RequestUpgrade { + start: 0, + length: 10, + }), + ) + .await? + .unwrap(); + let block = proof.block.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(proof.seek, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 3); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(block.nodes[1].index, 13); + assert_eq!(block.nodes[2].index, 3); + assert_eq!(upgrade.start, 0); + assert_eq!(upgrade.length, 10); + assert_eq!(upgrade.nodes.len(), 1); + assert_eq!(upgrade.nodes[0].index, 17); + assert_eq!(upgrade.additional_nodes.len(), 0); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_upgrade_and_additional() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 0 }), + None, + None, + Some(RequestUpgrade { + start: 0, + length: 8, + }), + ) + .await? 
+ .unwrap(); + let block = proof.block.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(proof.seek, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 3); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(block.nodes[1].index, 13); + assert_eq!(block.nodes[2].index, 3); + assert_eq!(upgrade.start, 0); + assert_eq!(upgrade.length, 8); + assert_eq!(upgrade.nodes.len(), 0); + assert_eq!(upgrade.additional_nodes.len(), 1); + assert_eq!(upgrade.additional_nodes[0].index, 17); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_upgrade_from_existing_state() -> Result<(), HypercoreError> + { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 1, nodes: 0 }), + None, + None, + Some(RequestUpgrade { + start: 1, + length: 9, + }), + ) + .await? + .unwrap(); + let block = proof.block.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(proof.seek, None); + assert_eq!(block.index, 1); + assert_eq!(block.nodes.len(), 0); + assert_eq!(upgrade.start, 1); + assert_eq!(upgrade.length, 9); + assert_eq!(upgrade.nodes.len(), 3); + assert_eq!(upgrade.nodes[0].index, 5); + assert_eq!(upgrade.nodes[1].index, 11); + assert_eq!(upgrade.nodes[2].index, 17); + assert_eq!(upgrade.additional_nodes.len(), 0); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_upgrade_from_existing_state_with_additional( + ) -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 1, nodes: 0 }), + None, + None, + Some(RequestUpgrade { + start: 1, + length: 5, + }), + ) + .await? 
+ .unwrap(); + let block = proof.block.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(proof.seek, None); + assert_eq!(block.index, 1); + assert_eq!(block.nodes.len(), 0); + assert_eq!(upgrade.start, 1); + assert_eq!(upgrade.length, 5); + assert_eq!(upgrade.nodes.len(), 2); + assert_eq!(upgrade.nodes[0].index, 5); + assert_eq!(upgrade.nodes[1].index, 9); + assert_eq!(upgrade.additional_nodes.len(), 2); + assert_eq!(upgrade.additional_nodes[0].index, 13); + assert_eq!(upgrade.additional_nodes[1].index, 17); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_seek_1_no_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 2 }), + None, + Some(RequestSeek { bytes: 8 }), + None, + ) + .await? + .unwrap(); + let block = proof.block.unwrap(); + assert_eq!(proof.seek, None); // seek included in block + assert_eq!(proof.upgrade, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 2); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(block.nodes[1].index, 13); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_seek_2_no_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 2 }), + None, + Some(RequestSeek { bytes: 10 }), + None, + ) + .await? 
+ .unwrap(); + let block = proof.block.unwrap(); + assert_eq!(proof.seek, None); // seek included in block + assert_eq!(proof.upgrade, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 2); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(block.nodes[1].index, 13); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_seek_3_no_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 2 }), + None, + Some(RequestSeek { bytes: 13 }), + None, + ) + .await? + .unwrap(); + let block = proof.block.unwrap(); + let seek = proof.seek.unwrap(); + assert_eq!(proof.upgrade, None); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 1); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(seek.nodes.len(), 2); + assert_eq!(seek.nodes[0].index, 12); + assert_eq!(seek.nodes[1].index, 14); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_seek_to_tree_no_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(16).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 0, nodes: 4 }), + None, + Some(RequestSeek { bytes: 26 }), + None, + ) + .await? 
+ .unwrap(); + let block = proof.block.unwrap(); + let seek = proof.seek.unwrap(); + assert_eq!(proof.upgrade, None); + assert_eq!(block.nodes.len(), 3); + assert_eq!(block.nodes[0].index, 2); + assert_eq!(block.nodes[1].index, 5); + assert_eq!(block.nodes[2].index, 11); + assert_eq!(seek.nodes.len(), 2); + assert_eq!(seek.nodes[0].index, 19); + assert_eq!(seek.nodes[1].index, 27); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_block_and_seek_with_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + Some(RequestBlock { index: 4, nodes: 2 }), + None, + Some(RequestSeek { bytes: 13 }), + Some(RequestUpgrade { + start: 8, + length: 2, + }), + ) + .await? + .unwrap(); + let block = proof.block.unwrap(); + let seek = proof.seek.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(block.index, 4); + assert_eq!(block.nodes.len(), 1); + assert_eq!(block.nodes[0].index, 10); + assert_eq!(seek.nodes.len(), 2); + assert_eq!(seek.nodes[0].index, 12); + assert_eq!(seek.nodes[1].index, 14); + assert_eq!(upgrade.nodes.len(), 1); + assert_eq!(upgrade.nodes[0].index, 17); + assert_eq!(upgrade.additional_nodes.len(), 0); + Ok(()) + } + + #[async_std::test] + async fn core_create_proof_seek_with_upgrade() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + let proof = hypercore + .create_proof( + None, + None, + Some(RequestSeek { bytes: 13 }), + Some(RequestUpgrade { + start: 0, + length: 10, + }), + ) + .await? 
+ .unwrap(); + let seek = proof.seek.unwrap(); + let upgrade = proof.upgrade.unwrap(); + assert_eq!(proof.block, None); + assert_eq!(seek.nodes.len(), 4); + assert_eq!(seek.nodes[0].index, 12); + assert_eq!(seek.nodes[1].index, 14); + assert_eq!(seek.nodes[2].index, 9); + assert_eq!(seek.nodes[3].index, 3); + assert_eq!(upgrade.nodes.len(), 1); + assert_eq!(upgrade.nodes[0].index, 17); + assert_eq!(upgrade.additional_nodes.len(), 0); + Ok(()) + } + + #[async_std::test] + async fn core_verify_proof_invalid_signature() -> Result<(), HypercoreError> { + let mut hypercore = create_hypercore_with_data(10).await?; + // Invalid clone hypercore with a different public key + let mut hypercore_clone = create_hypercore_with_data(0).await?; + let proof = hypercore + .create_proof( + None, + Some(RequestBlock { index: 6, nodes: 0 }), + None, + Some(RequestUpgrade { + start: 0, + length: 10, + }), + ) + .await? + .unwrap(); + assert!(hypercore_clone + .verify_and_apply_proof(&proof) + .await + .is_err()); + Ok(()) + } + + #[async_std::test] + async fn core_verify_and_apply_proof() -> Result<(), HypercoreError> { + let mut main = create_hypercore_with_data(10).await?; + let mut clone = create_hypercore_with_data_and_key_pair( + 0, + PartialKeypair { + public: main.key_pair.public, + secret: None, + }, + ) + .await?; + let index = 6; + let nodes = clone.missing_nodes(index).await?; + let proof = main + .create_proof( + None, + Some(RequestBlock { index, nodes }), + None, + Some(RequestUpgrade { + start: 0, + length: 10, + }), + ) + .await? 
+ .unwrap(); + assert!(clone.verify_and_apply_proof(&proof).await?); + let main_info = main.info(); + let clone_info = clone.info(); + assert_eq!(main_info.byte_length, clone_info.byte_length); + assert_eq!(main_info.length, clone_info.length); + assert!(main.get(6).await?.is_some()); + assert!(clone.get(6).await?.is_none()); + + // Fetch data for index 6 and verify it is found + let index = 6; + let nodes = clone.missing_nodes(index).await?; + let proof = main + .create_proof(Some(RequestBlock { index, nodes }), None, None, None) + .await? + .unwrap(); + assert!(clone.verify_and_apply_proof(&proof).await?); + Ok(()) + } + + async fn create_hypercore_with_data( + length: u64, + ) -> Result, HypercoreError> { + let signing_key = generate_signing_key(); + create_hypercore_with_data_and_key_pair( + length, + PartialKeypair { + public: signing_key.verifying_key(), + secret: Some(signing_key), + }, + ) + .await + } + + async fn create_hypercore_with_data_and_key_pair( + length: u64, + key_pair: PartialKeypair, + ) -> Result, HypercoreError> { + let storage = Storage::new_memory().await?; + let mut hypercore = Hypercore::new( + storage, + HypercoreOptions { + key_pair: Some(key_pair), + open: false, + #[cfg(feature = "cache")] + node_cache_options: None, + }, + ) + .await?; + for i in 0..length { + hypercore.append(format!("#{}", i).as_bytes()).await?; + } + Ok(hypercore) + } +} diff --git a/vendor/hypercore/src/crypto/hash.rs b/vendor/hypercore/src/crypto/hash.rs new file mode 100644 index 00000000..f744048d --- /dev/null +++ b/vendor/hypercore/src/crypto/hash.rs @@ -0,0 +1,362 @@ +use blake2::{ + digest::{generic_array::GenericArray, typenum::U32, FixedOutput}, + Blake2b, Blake2bMac, Digest, +}; +use byteorder::{BigEndian, WriteBytesExt}; +use compact_encoding::State; +use ed25519_dalek::VerifyingKey; +use merkle_tree_stream::Node as NodeTrait; +use std::convert::AsRef; +use std::mem; +use std::ops::{Deref, DerefMut}; + +use crate::common::Node; + +// 
https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack
+const LEAF_TYPE: [u8; 1] = [0x00];
+const PARENT_TYPE: [u8; 1] = [0x01];
+const ROOT_TYPE: [u8; 1] = [0x02];
+const HYPERCORE: [u8; 9] = *b"hypercore";
+
+// These are the output of hashing the namespace; see the `hash_namespace` test below for how they are produced
+// https://github.com/holepunchto/hypercore/blob/cf08b72f14ed7d9ef6d497ebb3071ee0ae20967e/lib/caps.js#L16
+const TREE: [u8; 32] = [
+    0x9F, 0xAC, 0x70, 0xB5, 0xC, 0xA1, 0x4E, 0xFC, 0x4E, 0x91, 0xC8, 0x33, 0xB2, 0x4, 0xE7, 0x5B,
+    0x8B, 0x5A, 0xAD, 0x8B, 0x58, 0x81, 0xBF, 0xC0, 0xAD, 0xB5, 0xEF, 0x38, 0xA3, 0x27, 0x5B, 0x9C,
+];
+
+// const DEFAULT_NAMESPACE: [u8; 32] = [
+//     0x41, 0x44, 0xEE, 0xA5, 0x31, 0xE4, 0x83, 0xD5, 0x4E, 0x0C, 0x14, 0xF4, 0xCA, 0x68, 0xE0, 0x64,
+//     0x4F, 0x35, 0x53, 0x43, 0xFF, 0x6F, 0xCB, 0x0F, 0x00, 0x52, 0x00, 0xE1, 0x2C, 0xD7, 0x47, 0xCB,
+// ];
+
+// const MANIFEST: [u8; 32] = [
+//     0xE6, 0x4B, 0x71, 0x08, 0xEA, 0xCC, 0xE4, 0x7C, 0xFC, 0x61, 0xAC, 0x85, 0x05, 0x68, 0xF5, 0x5F,
+//     0x8B, 0x15, 0xB8, 0x2E, 0xC5, 0xED, 0x78, 0xC4, 0xEC, 0x59, 0x7B, 0x03, 0x6E, 0x2A, 0x14, 0x98,
+// ];
+
+pub(crate) type Blake2bResult = GenericArray<u8, U32>;
+type Blake2b256 = Blake2b<U32>;
+
+/// `BLAKE2b` hash.
+#[derive(Debug, Clone, PartialEq)]
+pub(crate) struct Hash {
+    hash: Blake2bResult,
+}
+
+impl Hash {
+    /// Hash a `Leaf` node.
+    #[allow(dead_code)]
+    pub(crate) fn from_leaf(data: &[u8]) -> Self {
+        let size = u64_as_be(data.len() as u64);
+
+        let mut hasher = Blake2b256::new();
+        hasher.update(LEAF_TYPE);
+        hasher.update(size);
+        hasher.update(data);
+
+        Self {
+            hash: hasher.finalize(),
+        }
+    }
+
+    /// Hash two `Leaf` node hashes together to form a `Parent` hash.
+ #[allow(dead_code)] + pub(crate) fn from_hashes(left: &Node, right: &Node) -> Self { + let (node1, node2) = if left.index <= right.index { + (left, right) + } else { + (right, left) + }; + + let size = u64_as_be(node1.length + node2.length); + + let mut hasher = Blake2b256::new(); + hasher.update(PARENT_TYPE); + hasher.update(size); + hasher.update(node1.hash()); + hasher.update(node2.hash()); + + Self { + hash: hasher.finalize(), + } + } + + /// Hash a public key. Useful to find the key you're looking for on a public + /// network without leaking the key itself. + #[allow(dead_code)] + pub(crate) fn for_discovery_key(public_key: VerifyingKey) -> Self { + let mut hasher = + Blake2bMac::::new_with_salt_and_personal(public_key.as_bytes(), &[], &[]).unwrap(); + blake2::digest::Update::update(&mut hasher, &HYPERCORE); + Self { + hash: hasher.finalize_fixed(), + } + } + + /// Hash a vector of `Root` nodes. + // Called `crypto.tree()` in the JS implementation. + #[allow(dead_code)] + pub(crate) fn from_roots(roots: &[impl AsRef]) -> Self { + let mut hasher = Blake2b256::new(); + hasher.update(ROOT_TYPE); + + for node in roots { + let node = node.as_ref(); + hasher.update(node.hash()); + hasher.update(u64_as_be(node.index())); + hasher.update(u64_as_be(node.len())); + } + + Self { + hash: hasher.finalize(), + } + } + + /// Returns a byte slice of this `Hash`'s contents. + pub(crate) fn as_bytes(&self) -> &[u8] { + self.hash.as_slice() + } + + // NB: The following methods mirror Javascript naming in + // https://github.com/mafintosh/hypercore-crypto/blob/master/index.js + // for v10 that use LE bytes. 
+ + /// Hash data + pub(crate) fn data(data: &[u8]) -> Self { + let (mut state, mut size) = State::new_with_size(8); + state + .encode_u64(data.len() as u64, &mut size) + .expect("Encoding u64 should not fail"); + + let mut hasher = Blake2b256::new(); + hasher.update(LEAF_TYPE); + hasher.update(&size); + hasher.update(data); + + Self { + hash: hasher.finalize(), + } + } + + /// Hash a parent + pub(crate) fn parent(left: &Node, right: &Node) -> Self { + let (node1, node2) = if left.index <= right.index { + (left, right) + } else { + (right, left) + }; + + let (mut state, mut size) = State::new_with_size(8); + state + .encode_u64(node1.length + node2.length, &mut size) + .expect("Encoding u64 should not fail"); + + let mut hasher = Blake2b256::new(); + hasher.update(PARENT_TYPE); + hasher.update(&size); + hasher.update(node1.hash()); + hasher.update(node2.hash()); + + Self { + hash: hasher.finalize(), + } + } + + /// Hash a tree + pub(crate) fn tree(roots: &[impl AsRef]) -> Self { + let mut hasher = Blake2b256::new(); + hasher.update(ROOT_TYPE); + + for node in roots { + let node = node.as_ref(); + let (mut state, mut buffer) = State::new_with_size(16); + state + .encode_u64(node.index(), &mut buffer) + .expect("Encoding u64 should not fail"); + state + .encode_u64(node.len(), &mut buffer) + .expect("Encoding u64 should not fail"); + + hasher.update(node.hash()); + hasher.update(&buffer[..8]); + hasher.update(&buffer[8..]); + } + + Self { + hash: hasher.finalize(), + } + } +} + +fn u64_as_be(n: u64) -> [u8; 8] { + let mut size = [0u8; mem::size_of::()]; + size.as_mut().write_u64::(n).unwrap(); + size +} + +impl Deref for Hash { + type Target = Blake2bResult; + + fn deref(&self) -> &Self::Target { + &self.hash + } +} + +impl DerefMut for Hash { + fn deref_mut(&mut self) -> &mut Self::Target { + &mut self.hash + } +} + +/// Create a signable buffer for tree. This is treeSignable in Javascript. 
+/// See https://github.com/hypercore-protocol/hypercore/blob/70b271643c4e4b1e5ecae5bb579966dfe6361ff3/lib/caps.js#L17 +pub(crate) fn signable_tree(hash: &[u8], length: u64, fork: u64) -> Box<[u8]> { + let (mut state, mut buffer) = State::new_with_size(80); + state + .encode_fixed_32(&TREE, &mut buffer) + .expect("Should be able "); + state + .encode_fixed_32(hash, &mut buffer) + .expect("Encoding fixed 32 bytes should not fail"); + state + .encode_u64(length, &mut buffer) + .expect("Encoding u64 should not fail"); + state + .encode_u64(fork, &mut buffer) + .expect("Encoding u64 should not fail"); + buffer +} + +#[cfg(test)] +mod tests { + use super::*; + + use self::data_encoding::HEXLOWER; + use data_encoding; + + fn hash_with_extra_byte(data: &[u8], byte: u8) -> Box<[u8]> { + let mut hasher = Blake2b256::new(); + hasher.update(data); + hasher.update([byte]); + let hash = hasher.finalize(); + hash.as_slice().into() + } + + fn hex_bytes(hex: &str) -> Vec { + HEXLOWER.decode(hex.as_bytes()).unwrap() + } + + fn check_hash(hash: Hash, hex: &str) { + assert_eq!(hash.as_bytes(), &hex_bytes(hex)[..]); + } + + #[test] + fn leaf_hash() { + check_hash( + Hash::from_leaf(&[]), + "5187b7a8021bf4f2c004ea3a54cfece1754f11c7624d2363c7f4cf4fddd1441e", + ); + check_hash( + Hash::from_leaf(&[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), + "e1001bb0bb9322b6b202b2f737dc12181b11727168d33ca48ffe361c66cd1abe", + ); + } + + #[test] + fn parent_hash() { + let d1: &[u8] = &[0, 1, 2, 3, 4]; + let d2: &[u8] = &[42, 43, 44, 45, 46, 47, 48]; + let node1 = Node::new(0, Hash::from_leaf(d1).as_bytes().to_vec(), d1.len() as u64); + let node2 = Node::new(1, Hash::from_leaf(d2).as_bytes().to_vec(), d2.len() as u64); + check_hash( + Hash::from_hashes(&node1, &node2), + "6fac58578fa385f25a54c0637adaca71fdfddcea885d561f33d80c4487149a14", + ); + check_hash( + Hash::from_hashes(&node2, &node1), + "6fac58578fa385f25a54c0637adaca71fdfddcea885d561f33d80c4487149a14", + ); + } + + #[test] + fn root_hash() { + let d1: &[u8] 
= &[0, 1, 2, 3, 4]; + let d2: &[u8] = &[42, 43, 44, 45, 46, 47, 48]; + let node1 = Node::new(0, Hash::from_leaf(d1).as_bytes().to_vec(), d1.len() as u64); + let node2 = Node::new(1, Hash::from_leaf(d2).as_bytes().to_vec(), d2.len() as u64); + check_hash( + Hash::from_roots(&[&node1, &node2]), + "2d117e0bb15c6e5236b6ce764649baed1c41890da901a015341503146cc20bcd", + ); + check_hash( + Hash::from_roots(&[&node2, &node1]), + "9826c8c2d28fc309cce73a4b6208e83e5e4b0433d2369bfbf8858272153849f1", + ); + } + + #[test] + fn discovery_key_hashing() -> Result<(), ed25519_dalek::SignatureError> { + let public_key = VerifyingKey::from_bytes(&[ + 119, 143, 141, 149, 81, 117, 201, 46, 76, 237, 94, 79, 85, 99, 246, 155, 254, 192, 200, + 108, 198, 246, 112, 53, 44, 69, 121, 67, 102, 111, 230, 57, + ])?; + + let expected = &[ + 37, 167, 138, 168, 22, 21, 132, 126, 186, 0, 153, 93, 242, 157, 212, 29, 126, 227, 15, + 59, 1, 248, 146, 32, 159, 121, 183, 90, 87, 217, 137, 225, + ]; + + assert_eq!(Hash::for_discovery_key(public_key).as_bytes(), expected); + + Ok(()) + } + + // The following uses test data from + // https://github.com/mafintosh/hypercore-crypto/blob/master/test.js + + #[test] + fn hash_leaf() { + let data = b"hello world"; + check_hash( + Hash::data(data), + "9f1b578fd57a4df015493d2886aec9600eef913c3bb009768c7f0fb875996308", + ); + } + + #[test] + fn hash_parent() { + let data = b"hello world"; + let len = data.len() as u64; + let node1 = Node::new(0, Hash::data(data).as_bytes().to_vec(), len); + let node2 = Node::new(1, Hash::data(data).as_bytes().to_vec(), len); + check_hash( + Hash::parent(&node1, &node2), + "3ad0c9b58b771d1b7707e1430f37c23a23dd46e0c7c3ab9c16f79d25f7c36804", + ); + } + + #[test] + fn hash_tree() { + let hash: [u8; 32] = [0; 32]; + let node1 = Node::new(3, hash.to_vec(), 11); + let node2 = Node::new(9, hash.to_vec(), 2); + check_hash( + Hash::tree(&[&node1, &node2]), + "0e576a56b478cddb6ffebab8c494532b6de009466b2e9f7af9143fc54b9eaa36", + ); + } + + // This 
is the Rust version from + // https://github.com/hypercore-protocol/hypercore/blob/70b271643c4e4b1e5ecae5bb579966dfe6361ff3/lib/caps.js + // and validates that our arrays match + #[test] + fn hash_namespace() { + let mut hasher = Blake2b256::new(); + hasher.update(HYPERCORE); + let hash = hasher.finalize(); + let ns = hash.as_slice(); + let tree: Box<[u8]> = { hash_with_extra_byte(ns, 0) }; + assert_eq!(tree, TREE.into()); + } +} diff --git a/vendor/hypercore/src/crypto/key_pair.rs b/vendor/hypercore/src/crypto/key_pair.rs new file mode 100644 index 00000000..683cb689 --- /dev/null +++ b/vendor/hypercore/src/crypto/key_pair.rs @@ -0,0 +1,57 @@ +//! `Ed25519` key pair generation, signing and verification. + +use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey}; +use rand::rngs::OsRng; + +use crate::HypercoreError; + +/// Key pair in which the secret key may be missing, making the hypercore read-only. +#[derive(Debug, Clone)] +pub struct PartialKeypair { + /// Public key + pub public: VerifyingKey, + /// Secret key. If None, the hypercore is read-only. + pub secret: Option<SigningKey>, +} + +/// Generate a new `Ed25519` key pair. +pub fn generate() -> SigningKey { + let mut csprng = OsRng; + SigningKey::generate(&mut csprng) +} + +/// Sign a byte slice using a keypair's private key. +pub fn sign(signing_key: &SigningKey, msg: &[u8]) -> Signature { + signing_key.sign(msg) +} + +/// Verify a signature on a message with a keypair's public key.
+pub fn verify( + public: &VerifyingKey, + msg: &[u8], + sig: Option<&Signature>, +) -> Result<(), HypercoreError> { + match sig { + None => Err(HypercoreError::InvalidSignature { + context: "No signature provided.".to_string(), + }), + Some(sig) => { + if public.verify(msg, sig).is_ok() { + Ok(()) + } else { + Err(HypercoreError::InvalidSignature { + context: "Signature could not be verified.".to_string(), + }) + } + } + } +} + +#[test] +fn can_verify_messages() { + let signing_key = generate(); + let from = b"hello"; + let sig = sign(&signing_key, from); + verify(&signing_key.verifying_key(), from, Some(&sig)).unwrap(); + verify(&signing_key.verifying_key(), b"oops", Some(&sig)).unwrap_err(); +} diff --git a/vendor/hypercore/src/crypto/manifest.rs b/vendor/hypercore/src/crypto/manifest.rs new file mode 100644 index 00000000..b3900c5a --- /dev/null +++ b/vendor/hypercore/src/crypto/manifest.rs @@ -0,0 +1,43 @@ +// These are the output of the following link: // https://github.com/holepunchto/hypercore/blob/cf08b72f14ed7d9ef6d497ebb3071ee0ae20967e/lib/caps.js#L16 + +const DEFAULT_NAMESPACE: [u8; 32] = [ + 0x41, 0x44, 0xEE, 0xA5, 0x31, 0xE4, 0x83, 0xD5, 0x4E, 0x0C, 0x14, 0xF4, 0xCA, 0x68, 0xE0, 0x64, + 0x4F, 0x35, 0x53, 0x43, 0xFF, 0x6F, 0xCB, 0x0F, 0x00, 0x52, 0x00, 0xE1, 0x2C, 0xD7, 0x47, 0xCB, +]; + +// TODO: Eventually this would be used in manifestHash +// https://github.com/holepunchto/hypercore/blob/cf08b72f14ed7d9ef6d497ebb3071ee0ae20967e/lib/manifest.js#L211 +// +// const MANIFEST: [u8; 32] = [ +// 0xE6, 0x4B, 0x71, 0x08, 0xEA, 0xCC, 0xE4, 0x7C, 0xFC, 0x61, 0xAC, 0x85, 0x05, 0x68, 0xF5, 0x5F, +// 0x8B, 0x15, 0xB8, 0x2E, 0xC5, 0xED, 0x78, 0xC4, 0xEC, 0x59, 0x7B, 0x03, 0x6E, 0x2A, 0x14, 0x98, +// ]; + +#[derive(Debug, Clone)] +pub(crate) struct Manifest { + pub(crate) hash: String, + // TODO: In v11 can be static + // pub(crate) static_core: Option, + pub(crate) signer: ManifestSigner, + // TODO: In v11 can have multiple signers + // pub(crate) multiple_signers: 
Option, +} + +#[derive(Debug, Clone)] +pub(crate) struct ManifestSigner { + pub(crate) signature: String, + pub(crate) namespace: [u8; 32], + pub(crate) public_key: [u8; 32], +} + +pub(crate) fn default_signer_manifest(public_key: [u8; 32]) -> Manifest { + Manifest { + hash: "blake2b".to_string(), + signer: ManifestSigner { + signature: "ed25519".to_string(), + namespace: DEFAULT_NAMESPACE, + public_key, + }, + } +} diff --git a/vendor/hypercore/src/crypto/mod.rs b/vendor/hypercore/src/crypto/mod.rs new file mode 100644 index 00000000..1bf2ab5b --- /dev/null +++ b/vendor/hypercore/src/crypto/mod.rs @@ -0,0 +1,9 @@ +//! Cryptographic functions. + +mod hash; +mod key_pair; +mod manifest; + +pub(crate) use hash::{signable_tree, Hash}; +pub use key_pair::{generate as generate_signing_key, sign, verify, PartialKeypair}; +pub(crate) use manifest::{default_signer_manifest, Manifest, ManifestSigner}; diff --git a/vendor/hypercore/src/data/mod.rs b/vendor/hypercore/src/data/mod.rs new file mode 100644 index 00000000..fa70a904 --- /dev/null +++ b/vendor/hypercore/src/data/mod.rs @@ -0,0 +1,46 @@ +use crate::common::{NodeByteRange, Store, StoreInfo, StoreInfoInstruction}; +use futures::future::Either; + +/// Block store +#[derive(Debug, Default)] +pub(crate) struct BlockStore {} + +impl BlockStore { + pub(crate) fn append_batch<A: AsRef<[u8]>, B: AsRef<[A]>>( + &self, + batch: B, + batch_length: usize, + byte_length: u64, + ) -> StoreInfo { + let mut buffer: Vec<u8> = Vec::with_capacity(batch_length); + for data in batch.as_ref().iter() { + buffer.extend_from_slice(data.as_ref()); + } + StoreInfo::new_content(Store::Data, byte_length, &buffer) + } + + pub(crate) fn put(&self, value: &[u8], offset: u64) -> StoreInfo { + StoreInfo::new_content(Store::Data, offset, value) + } + + pub(crate) fn read( + &self, + byte_range: &NodeByteRange, + info: Option<StoreInfo>, + ) -> Either<StoreInfoInstruction, Vec<u8>> { + if let Some(info) = info { + Either::Right(info.data.unwrap()) + } else { + Either::Left(StoreInfoInstruction::new_content( 
Store::Data, + byte_range.index, + byte_range.length, + )) + } + } + + /// Clears a segment, returns infos to write to storage. + pub(crate) fn clear(&mut self, start: u64, length: u64) -> StoreInfo { + StoreInfo::new_delete(Store::Data, start, length) + } +} diff --git a/vendor/hypercore/src/encoding.rs b/vendor/hypercore/src/encoding.rs new file mode 100644 index 00000000..ed049a65 --- /dev/null +++ b/vendor/hypercore/src/encoding.rs @@ -0,0 +1,370 @@ +//! Hypercore-specific compact encodings +pub use compact_encoding::{CompactEncoding, EncodingError, EncodingErrorKind, State}; +use std::convert::TryInto; +use std::ops::{Deref, DerefMut}; + +use crate::{ + crypto::{Manifest, ManifestSigner}, + DataBlock, DataHash, DataSeek, DataUpgrade, Node, RequestBlock, RequestSeek, RequestUpgrade, +}; + +#[derive(Debug, Clone)] +/// Wrapper struct for compact_encoding::State +pub struct HypercoreState(pub State); + +impl Default for HypercoreState { + /// Passthrough to compact_encoding + fn default() -> Self { + Self::new() + } +} + +impl HypercoreState { + /// Passthrough to compact_encoding + pub fn new() -> HypercoreState { + HypercoreState(State::new()) + } + + /// Passthrough to compact_encoding + pub fn new_with_size(size: usize) -> (HypercoreState, Box<[u8]>) { + let (state, buffer) = State::new_with_size(size); + (HypercoreState(state), buffer) + } + + /// Passthrough to compact_encoding + pub fn new_with_start_and_end(start: usize, end: usize) -> HypercoreState { + HypercoreState(State::new_with_start_and_end(start, end)) + } + + /// Passthrough to compact_encoding + pub fn from_buffer(buffer: &[u8]) -> HypercoreState { + HypercoreState(State::from_buffer(buffer)) + } +} + +impl Deref for HypercoreState { + type Target = State; + + fn deref(&self) -> &Self::Target { + &self.0 + } +} + +impl DerefMut for HypercoreState { + fn deref_mut(&mut self) -> &mut State { + &mut self.0 + } +} + +impl CompactEncoding<Node> for HypercoreState { + fn preencode(&mut self, value: 
&Node) -> Result { + self.0.preencode(&value.index)?; + self.0.preencode(&value.length)?; + self.0.preencode_fixed_32() + } + + fn encode(&mut self, value: &Node, buffer: &mut [u8]) -> Result { + self.0.encode(&value.index, buffer)?; + self.0.encode(&value.length, buffer)?; + self.0.encode_fixed_32(&value.hash, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let index: u64 = self.0.decode(buffer)?; + let length: u64 = self.0.decode(buffer)?; + let hash: Box<[u8]> = self.0.decode_fixed_32(buffer)?; + Ok(Node::new(index, hash.to_vec(), length)) + } +} + +impl CompactEncoding> for HypercoreState { + fn preencode(&mut self, value: &Vec) -> Result { + let len = value.len(); + self.0.preencode(&len)?; + for val in value { + self.preencode(val)?; + } + Ok(self.end()) + } + + fn encode(&mut self, value: &Vec, buffer: &mut [u8]) -> Result { + let len = value.len(); + self.0.encode(&len, buffer)?; + for val in value { + self.encode(val, buffer)?; + } + Ok(self.start()) + } + + fn decode(&mut self, buffer: &[u8]) -> Result, EncodingError> { + let len: usize = self.0.decode(buffer)?; + let mut value = Vec::with_capacity(len); + for _ in 0..len { + value.push(self.decode(buffer)?); + } + Ok(value) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &RequestBlock) -> Result { + self.0.preencode(&value.index)?; + self.0.preencode(&value.nodes) + } + + fn encode(&mut self, value: &RequestBlock, buffer: &mut [u8]) -> Result { + self.0.encode(&value.index, buffer)?; + self.0.encode(&value.nodes, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let index: u64 = self.0.decode(buffer)?; + let nodes: u64 = self.0.decode(buffer)?; + Ok(RequestBlock { index, nodes }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &RequestSeek) -> Result { + self.0.preencode(&value.bytes) + } + + fn encode(&mut self, value: &RequestSeek, buffer: &mut [u8]) -> Result { + self.0.encode(&value.bytes, 
buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let bytes: u64 = self.0.decode(buffer)?; + Ok(RequestSeek { bytes }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &RequestUpgrade) -> Result { + self.0.preencode(&value.start)?; + self.0.preencode(&value.length) + } + + fn encode( + &mut self, + value: &RequestUpgrade, + buffer: &mut [u8], + ) -> Result { + self.0.encode(&value.start, buffer)?; + self.0.encode(&value.length, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let start: u64 = self.0.decode(buffer)?; + let length: u64 = self.0.decode(buffer)?; + Ok(RequestUpgrade { start, length }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &DataBlock) -> Result { + self.0.preencode(&value.index)?; + self.0.preencode(&value.value)?; + self.preencode(&value.nodes) + } + + fn encode(&mut self, value: &DataBlock, buffer: &mut [u8]) -> Result { + self.0.encode(&value.index, buffer)?; + self.0.encode(&value.value, buffer)?; + self.encode(&value.nodes, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let index: u64 = self.0.decode(buffer)?; + let value: Vec = self.0.decode(buffer)?; + let nodes: Vec = self.decode(buffer)?; + Ok(DataBlock { + index, + value, + nodes, + }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &DataHash) -> Result { + self.0.preencode(&value.index)?; + self.preencode(&value.nodes) + } + + fn encode(&mut self, value: &DataHash, buffer: &mut [u8]) -> Result { + self.0.encode(&value.index, buffer)?; + self.encode(&value.nodes, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let index: u64 = self.0.decode(buffer)?; + let nodes: Vec = self.decode(buffer)?; + Ok(DataHash { index, nodes }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &DataSeek) -> Result { + self.0.preencode(&value.bytes)?; + self.preencode(&value.nodes) + } + + fn 
encode(&mut self, value: &DataSeek, buffer: &mut [u8]) -> Result { + self.0.encode(&value.bytes, buffer)?; + self.encode(&value.nodes, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let bytes: u64 = self.0.decode(buffer)?; + let nodes: Vec = self.decode(buffer)?; + Ok(DataSeek { bytes, nodes }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &DataUpgrade) -> Result { + self.0.preencode(&value.start)?; + self.0.preencode(&value.length)?; + self.preencode(&value.nodes)?; + self.preencode(&value.additional_nodes)?; + self.0.preencode(&value.signature) + } + + fn encode(&mut self, value: &DataUpgrade, buffer: &mut [u8]) -> Result { + self.0.encode(&value.start, buffer)?; + self.0.encode(&value.length, buffer)?; + self.encode(&value.nodes, buffer)?; + self.encode(&value.additional_nodes, buffer)?; + self.0.encode(&value.signature, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let start: u64 = self.0.decode(buffer)?; + let length: u64 = self.0.decode(buffer)?; + let nodes: Vec = self.decode(buffer)?; + let additional_nodes: Vec = self.decode(buffer)?; + let signature: Vec = self.0.decode(buffer)?; + Ok(DataUpgrade { + start, + length, + nodes, + additional_nodes, + signature, + }) + } +} + +impl CompactEncoding for State { + fn preencode(&mut self, value: &Manifest) -> Result { + self.add_end(1)?; // Version + self.add_end(1)?; // hash in one byte + self.add_end(1)?; // type in one byte + self.preencode(&value.signer) + } + + fn encode(&mut self, value: &Manifest, buffer: &mut [u8]) -> Result { + self.set_byte_to_buffer(0, buffer)?; // Version + if &value.hash == "blake2b" { + self.set_byte_to_buffer(0, buffer)?; // Version + } else { + return Err(EncodingError::new( + EncodingErrorKind::InvalidData, + &format!("Unknown hash: {}", &value.hash), + )); + } + // Type. 
0: static, 1: signer, 2: multiple signers + self.set_byte_to_buffer(1, buffer)?; // Version + self.encode(&value.signer, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let version: u8 = self.decode_u8(buffer)?; + if version != 0 { + panic!("Unknown manifest version {}", version); + } + let hash_id: u8 = self.decode_u8(buffer)?; + let hash: String = if hash_id != 0 { + return Err(EncodingError::new( + EncodingErrorKind::InvalidData, + &format!("Unknown hash id: {hash_id}"), + )); + } else { + "blake2b".to_string() + }; + + let manifest_type: u8 = self.decode_u8(buffer)?; + if manifest_type != 1 { + return Err(EncodingError::new( + EncodingErrorKind::InvalidData, + &format!("Unknown manifest type: {manifest_type}"), + )); + } + let signer: ManifestSigner = self.decode(buffer)?; + + Ok(Manifest { hash, signer }) + } +} + +impl CompactEncoding for State { + fn preencode(&mut self, _value: &ManifestSigner) -> Result { + self.add_end(1)?; // Signature + self.preencode_fixed_32()?; + self.preencode_fixed_32() + } + + fn encode( + &mut self, + value: &ManifestSigner, + buffer: &mut [u8], + ) -> Result { + if &value.signature == "ed25519" { + self.set_byte_to_buffer(0, buffer)?; + } else { + return Err(EncodingError::new( + EncodingErrorKind::InvalidData, + &format!("Unknown signature type: {}", &value.signature), + )); + } + self.encode_fixed_32(&value.namespace, buffer)?; + self.encode_fixed_32(&value.public_key, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let signature_id: u8 = self.decode_u8(buffer)?; + let signature: String = if signature_id != 0 { + return Err(EncodingError::new( + EncodingErrorKind::InvalidData, + &format!("Unknown signature id: {signature_id}"), + )); + } else { + "ed25519".to_string() + }; + let namespace: [u8; 32] = + self.decode_fixed_32(buffer)? 
+ .to_vec() + .try_into() + .map_err(|_err| { + EncodingError::new( + EncodingErrorKind::InvalidData, + "Invalid namespace in manifest signer", + ) + })?; + let public_key: [u8; 32] = + self.decode_fixed_32(buffer)? + .to_vec() + .try_into() + .map_err(|_err| { + EncodingError::new( + EncodingErrorKind::InvalidData, + "Invalid public key in manifest signer", + ) + })?; + + Ok(ManifestSigner { + signature, + namespace, + public_key, + }) + } +} diff --git a/vendor/hypercore/src/lib.rs b/vendor/hypercore/src/lib.rs new file mode 100644 index 00000000..a403e381 --- /dev/null +++ b/vendor/hypercore/src/lib.rs @@ -0,0 +1,101 @@ +#![forbid(unsafe_code, bad_style, future_incompatible)] +#![forbid(rust_2018_idioms, rust_2018_compatibility)] +#![forbid(missing_debug_implementations)] +#![forbid(missing_docs)] +#![warn(unreachable_pub)] +#![cfg_attr(test, deny(warnings))] +#![doc(test(attr(deny(warnings))))] + +//! ## Introduction +//! +//! Hypercore is a secure, distributed append-only log. Built for sharing +//! large datasets and streams of real time data as part of the [Dat] project. +//! This is a rust port of [the original Javascript version][holepunch-hypercore] +//! aiming for interoperability with LTS version. The primary way to use this +//! crate is through the [Hypercore] struct, which can be created using the +//! [HypercoreBuilder]. +//! +//! This crate supports WASM with `cargo build --target=wasm32-unknown-unknown`. +//! +//! ## Features +//! +//! ### `sparse` (default) +//! +//! When using disk storage, clearing values may create sparse files. On by default. +//! +//! ### `async-std` (default) +//! +//! Use the async-std runtime, on by default. Either this or `tokio` is mandatory. +//! +//! ### `tokio` +//! +//! Use the tokio runtime. Either this or `async_std` is mandatory. +//! +//! ### `cache` +//! +//! Use a moka cache for merkle tree nodes to speed-up reading. +//! +//! ## Example +//! ```rust +//! # #[cfg(feature = "tokio")] +//! 
# tokio_test::block_on(async { +//! # example().await; +//! # }); +//! # #[cfg(feature = "async-std")] +//! # async_std::task::block_on(async { +//! # example().await; +//! # }); +//! # async fn example() { +//! use hypercore::{HypercoreBuilder, Storage}; +//! +//! // Create an in-memory hypercore using a builder +//! let mut hypercore = HypercoreBuilder::new(Storage::new_memory().await.unwrap()) +//! .build() +//! .await +//! .unwrap(); +//! +//! // Append entries to the log +//! hypercore.append(b"Hello, ").await.unwrap(); +//! hypercore.append(b"world!").await.unwrap(); +//! +//! // Read entries from the log +//! assert_eq!(hypercore.get(0).await.unwrap().unwrap(), b"Hello, "); +//! assert_eq!(hypercore.get(1).await.unwrap().unwrap(), b"world!"); +//! # } +//! ``` +//! +//! Find more examples in the [examples] folder. +//! +//! [Dat]: https://github.com/datrs +//! [holepunch-hypercore]: https://github.com/holepunchto/hypercore +//! [Hypercore]: crate::core::Hypercore +//! [HypercoreBuilder]: crate::builder::HypercoreBuilder +//! 
[examples]: https://github.com/datrs/hypercore/tree/master/examples + +pub mod encoding; +pub mod prelude; + +mod bitfield; +mod builder; +mod common; +mod core; +mod crypto; +mod data; +mod oplog; +mod storage; +mod tree; + +#[cfg(feature = "cache")] +pub use crate::builder::CacheOptionsBuilder; +pub use crate::builder::HypercoreBuilder; +pub use crate::common::{ + DataBlock, DataHash, DataSeek, DataUpgrade, HypercoreError, Node, Proof, RequestBlock, + RequestSeek, RequestUpgrade, Store, +}; +pub use crate::core::{AppendOutcome, Hypercore, Info}; +pub use crate::crypto::{generate_signing_key, sign, verify, PartialKeypair}; +pub use crate::storage::Storage; +pub use ed25519_dalek::{ + SecretKey, Signature, SigningKey, VerifyingKey, KEYPAIR_LENGTH, PUBLIC_KEY_LENGTH, + SECRET_KEY_LENGTH, +}; diff --git a/vendor/hypercore/src/oplog/entry.rs b/vendor/hypercore/src/oplog/entry.rs new file mode 100644 index 00000000..e3681002 --- /dev/null +++ b/vendor/hypercore/src/oplog/entry.rs @@ -0,0 +1,164 @@ +use crate::encoding::{CompactEncoding, EncodingError, HypercoreState}; +use crate::{common::BitfieldUpdate, Node}; + +/// Entry tree upgrade +#[derive(Debug)] +pub(crate) struct EntryTreeUpgrade { + pub(crate) fork: u64, + pub(crate) ancestors: u64, + pub(crate) length: u64, + pub(crate) signature: Box<[u8]>, +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &EntryTreeUpgrade) -> Result { + self.0.preencode(&value.fork)?; + self.0.preencode(&value.ancestors)?; + self.0.preencode(&value.length)?; + self.0.preencode(&value.signature) + } + + fn encode( + &mut self, + value: &EntryTreeUpgrade, + buffer: &mut [u8], + ) -> Result { + self.0.encode(&value.fork, buffer)?; + self.0.encode(&value.ancestors, buffer)?; + self.0.encode(&value.length, buffer)?; + self.0.encode(&value.signature, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let fork: u64 = self.0.decode(buffer)?; + let ancestors: u64 = self.0.decode(buffer)?; + let 
length: u64 = self.0.decode(buffer)?; + let signature: Box<[u8]> = self.0.decode(buffer)?; + Ok(EntryTreeUpgrade { + fork, + ancestors, + length, + signature, + }) + } +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &BitfieldUpdate) -> Result { + self.0.add_end(1)?; + self.0.preencode(&value.start)?; + self.0.preencode(&value.length) + } + + fn encode( + &mut self, + value: &BitfieldUpdate, + buffer: &mut [u8], + ) -> Result { + let flags: u8 = if value.drop { 1 } else { 0 }; + self.0.set_byte_to_buffer(flags, buffer)?; + self.0.encode(&value.start, buffer)?; + self.0.encode(&value.length, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let flags = self.0.decode_u8(buffer)?; + let start: u64 = self.0.decode(buffer)?; + let length: u64 = self.0.decode(buffer)?; + Ok(BitfieldUpdate { + drop: flags == 1, + start, + length, + }) + } +} + +/// Oplog Entry +#[derive(Debug)] +pub struct Entry { + // TODO: This is a keyValueArray in JS + pub(crate) user_data: Vec, + pub(crate) tree_nodes: Vec, + pub(crate) tree_upgrade: Option, + pub(crate) bitfield: Option, +} + +impl CompactEncoding for HypercoreState { + fn preencode(&mut self, value: &Entry) -> Result { + self.0.add_end(1)?; // flags + if !value.user_data.is_empty() { + self.0.preencode(&value.user_data)?; + } + if !value.tree_nodes.is_empty() { + self.preencode(&value.tree_nodes)?; + } + if let Some(tree_upgrade) = &value.tree_upgrade { + self.preencode(tree_upgrade)?; + } + if let Some(bitfield) = &value.bitfield { + self.preencode(bitfield)?; + } + Ok(self.end()) + } + + fn encode(&mut self, value: &Entry, buffer: &mut [u8]) -> Result { + let start = self.0.start(); + self.0.add_start(1)?; + let mut flags: u8 = 0; + if !value.user_data.is_empty() { + flags |= 1; + self.0.encode(&value.user_data, buffer)?; + } + if !value.tree_nodes.is_empty() { + flags |= 2; + self.encode(&value.tree_nodes, buffer)?; + } + if let Some(tree_upgrade) = &value.tree_upgrade { + flags |= 
4; + self.encode(tree_upgrade, buffer)?; + } + if let Some(bitfield) = &value.bitfield { + flags |= 8; + self.encode(bitfield, buffer)?; + } + + buffer[start] = flags; + Ok(self.0.start()) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let flags = self.0.decode_u8(buffer)?; + let user_data: Vec = if flags & 1 != 0 { + self.0.decode(buffer)? + } else { + vec![] + }; + + let tree_nodes: Vec = if flags & 2 != 0 { + self.decode(buffer)? + } else { + vec![] + }; + + let tree_upgrade: Option = if flags & 4 != 0 { + let value: EntryTreeUpgrade = self.decode(buffer)?; + Some(value) + } else { + None + }; + + let bitfield: Option = if flags & 8 != 0 { + let value: BitfieldUpdate = self.decode(buffer)?; + Some(value) + } else { + None + }; + + Ok(Entry { + user_data, + tree_nodes, + tree_upgrade, + bitfield, + }) + } +} diff --git a/vendor/hypercore/src/oplog/header.rs b/vendor/hypercore/src/oplog/header.rs new file mode 100644 index 00000000..aa27dcec --- /dev/null +++ b/vendor/hypercore/src/oplog/header.rs @@ -0,0 +1,325 @@ +use compact_encoding::EncodingErrorKind; +use compact_encoding::{CompactEncoding, EncodingError, State}; +use ed25519_dalek::{SigningKey, PUBLIC_KEY_LENGTH, SECRET_KEY_LENGTH}; +use std::convert::TryInto; + +use crate::crypto::default_signer_manifest; +use crate::crypto::Manifest; +use crate::PartialKeypair; +use crate::VerifyingKey; + +/// Oplog header. 
+#[derive(Debug, Clone)] +pub(crate) struct Header { + // TODO: v11 has external + // pub(crate) external: Option, + // NB: This is the manifest hash in v11, right now + // just the public key, + pub(crate) key: [u8; 32], + pub(crate) manifest: Manifest, + pub(crate) key_pair: PartialKeypair, + // TODO: This is a keyValueArray in JS + pub(crate) user_data: Vec, + pub(crate) tree: HeaderTree, + pub(crate) hints: HeaderHints, +} + +impl Header { + /// Creates a new Header from given key pair + pub(crate) fn new(key_pair: PartialKeypair) -> Self { + let key = key_pair.public.to_bytes(); + let manifest = default_signer_manifest(key); + Self { + key, + manifest, + key_pair, + user_data: vec![], + tree: HeaderTree::new(), + hints: HeaderHints { + reorgs: vec![], + contiguous_length: 0, + }, + } + // Javascript side, initial header + // header = { + // external: null, + // key: opts.key || (compat ? manifest.signer.publicKey : manifestHash(manifest)), + // manifest, + // keyPair, + // userData: [], + // tree: { + // fork: 0, + // length: 0, + // rootHash: null, + // signature: null + // }, + // hints: { + // reorgs: [], + // contiguousLength: 0 + // } + // } + } +} + +/// Oplog header tree +#[derive(Debug, PartialEq, Clone)] +pub(crate) struct HeaderTree { + pub(crate) fork: u64, + pub(crate) length: u64, + pub(crate) root_hash: Box<[u8]>, + pub(crate) signature: Box<[u8]>, +} + +impl HeaderTree { + pub(crate) fn new() -> Self { + Self { + fork: 0, + length: 0, + root_hash: Box::new([]), + signature: Box::new([]), + } + } +} + +impl CompactEncoding for State { + fn preencode(&mut self, value: &HeaderTree) -> Result { + self.preencode(&value.fork)?; + self.preencode(&value.length)?; + self.preencode(&value.root_hash)?; + self.preencode(&value.signature) + } + + fn encode(&mut self, value: &HeaderTree, buffer: &mut [u8]) -> Result { + self.encode(&value.fork, buffer)?; + self.encode(&value.length, buffer)?; + self.encode(&value.root_hash, buffer)?; + 
self.encode(&value.signature, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let fork: u64 = self.decode(buffer)?; + let length: u64 = self.decode(buffer)?; + let root_hash: Box<[u8]> = self.decode(buffer)?; + let signature: Box<[u8]> = self.decode(buffer)?; + Ok(HeaderTree { + fork, + length, + root_hash, + signature, + }) + } +} + +/// NB: In Javascript's sodium the secret key contains in itself also the public key, so to +/// maintain binary compatibility, we store the public key in the oplog now twice. +impl CompactEncoding for State { + fn preencode(&mut self, value: &PartialKeypair) -> Result { + self.add_end(1 + PUBLIC_KEY_LENGTH)?; + match &value.secret { + Some(_) => { + // Also add room for the public key + self.add_end(1 + SECRET_KEY_LENGTH + PUBLIC_KEY_LENGTH) + } + None => self.add_end(1), + } + } + + fn encode( + &mut self, + value: &PartialKeypair, + buffer: &mut [u8], + ) -> Result { + let public_key_bytes: Box<[u8]> = value.public.as_bytes().to_vec().into_boxed_slice(); + self.encode(&public_key_bytes, buffer)?; + match &value.secret { + Some(secret_key) => { + let mut secret_key_bytes: Vec = + Vec::with_capacity(SECRET_KEY_LENGTH + PUBLIC_KEY_LENGTH); + secret_key_bytes.extend_from_slice(&secret_key.to_bytes()); + secret_key_bytes.extend_from_slice(&public_key_bytes); + let secret_key_bytes: Box<[u8]> = secret_key_bytes.into_boxed_slice(); + self.encode(&secret_key_bytes, buffer) + } + None => self.set_byte_to_buffer(0, buffer), + } + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let public_key_bytes: Box<[u8]> = self.decode(buffer)?; + let public_key_bytes: [u8; PUBLIC_KEY_LENGTH] = + public_key_bytes[0..PUBLIC_KEY_LENGTH].try_into().unwrap(); + let secret_key_bytes: Box<[u8]> = self.decode(buffer)?; + let secret: Option = if secret_key_bytes.is_empty() { + None + } else { + let secret_key_bytes: [u8; SECRET_KEY_LENGTH] = + secret_key_bytes[0..SECRET_KEY_LENGTH].try_into().unwrap(); + 
Some(SigningKey::from_bytes(&secret_key_bytes)) + }; + + Ok(PartialKeypair { + public: VerifyingKey::from_bytes(&public_key_bytes).unwrap(), + secret, + }) + } +} + +/// Oplog header hints +#[derive(Debug, Clone)] +pub(crate) struct HeaderHints { + pub(crate) reorgs: Vec, + pub(crate) contiguous_length: u64, +} + +impl CompactEncoding for State { + fn preencode(&mut self, value: &HeaderHints) -> Result { + self.preencode(&value.reorgs)?; + self.preencode(&value.contiguous_length) + } + + fn encode(&mut self, value: &HeaderHints, buffer: &mut [u8]) -> Result { + self.encode(&value.reorgs, buffer)?; + self.encode(&value.contiguous_length, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + Ok(HeaderHints { + reorgs: self.decode(buffer)?, + contiguous_length: self.decode(buffer)?, + }) + } +} + +impl CompactEncoding
for State { + fn preencode(&mut self, value: &Header) -> Result { + self.add_end(1)?; // Version + self.add_end(1)?; // Flags + self.preencode_fixed_32()?; // key + self.preencode(&value.manifest)?; + self.preencode(&value.key_pair)?; + self.preencode(&value.user_data)?; + self.preencode(&value.tree)?; + self.preencode(&value.hints) + } + + fn encode(&mut self, value: &Header, buffer: &mut [u8]) -> Result { + self.set_byte_to_buffer(1, buffer)?; // Version + let flags: u8 = 2 | 4; // Manifest and key pair, TODO: external=1 + self.set_byte_to_buffer(flags, buffer)?; + self.encode_fixed_32(&value.key, buffer)?; + self.encode(&value.manifest, buffer)?; + self.encode(&value.key_pair, buffer)?; + self.encode(&value.user_data, buffer)?; + self.encode(&value.tree, buffer)?; + self.encode(&value.hints, buffer) + } + + fn decode(&mut self, buffer: &[u8]) -> Result { + let version: u8 = self.decode_u8(buffer)?; + if version != 1 { + panic!("Unknown oplog version {}", version); + } + let _flags: u8 = self.decode_u8(buffer)?; + let key: [u8; 32] = self + .decode_fixed_32(buffer)? 
+ .to_vec() + .try_into() + .map_err(|_err| { + EncodingError::new( + EncodingErrorKind::InvalidData, + "Invalid key in oplog header", + ) + })?; + let manifest: Manifest = self.decode(buffer)?; + let key_pair: PartialKeypair = self.decode(buffer)?; + let user_data: Vec = self.decode(buffer)?; + let tree: HeaderTree = self.decode(buffer)?; + let hints: HeaderHints = self.decode(buffer)?; + + Ok(Header { + key, + manifest, + key_pair, + user_data, + tree, + hints, + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + + use crate::crypto::generate_signing_key; + + #[test] + fn encode_partial_key_pair() -> Result<(), EncodingError> { + let mut enc_state = State::new(); + let signing_key = generate_signing_key(); + let key_pair = PartialKeypair { + public: signing_key.verifying_key(), + secret: Some(signing_key), + }; + enc_state.preencode(&key_pair)?; + let mut buffer = enc_state.create_buffer(); + // Pub key: 1 byte for length, 32 bytes for content + // Sec key: 1 byte for length, 64 bytes for data + let expected_len = 1 + 32 + 1 + 64; + assert_eq!(buffer.len(), expected_len); + assert_eq!(enc_state.end(), expected_len); + assert_eq!(enc_state.start(), 0); + enc_state.encode(&key_pair, &mut buffer)?; + let mut dec_state = State::from_buffer(&buffer); + let key_pair_ret: PartialKeypair = dec_state.decode(&buffer)?; + assert_eq!(key_pair.public, key_pair_ret.public); + assert_eq!( + key_pair.secret.unwrap().to_bytes(), + key_pair_ret.secret.unwrap().to_bytes() + ); + Ok(()) + } + + #[test] + fn encode_tree() -> Result<(), EncodingError> { + let mut enc_state = State::new(); + let tree = HeaderTree::new(); + enc_state.preencode(&tree)?; + let mut buffer = enc_state.create_buffer(); + enc_state.encode(&tree, &mut buffer)?; + let mut dec_state = State::from_buffer(&buffer); + let tree_ret: HeaderTree = dec_state.decode(&buffer)?; + assert_eq!(tree, tree_ret); + Ok(()) + } + + #[test] + fn encode_header() -> Result<(), EncodingError> { + let mut enc_state = 
State::new(); + let signing_key = generate_signing_key(); + let signing_key = PartialKeypair { + public: signing_key.verifying_key(), + secret: Some(signing_key), + }; + let header = Header::new(signing_key); + enc_state.preencode(&header)?; + let mut buffer = enc_state.create_buffer(); + enc_state.encode(&header, &mut buffer)?; + let mut dec_state = State::from_buffer(&buffer); + let header_ret: Header = dec_state.decode(&buffer)?; + assert_eq!(header.key_pair.public, header_ret.key_pair.public); + assert_eq!(header.tree.fork, header_ret.tree.fork); + assert_eq!(header.tree.length, header_ret.tree.length); + assert_eq!(header.manifest.hash, header_ret.manifest.hash); + assert_eq!( + header.manifest.signer.public_key, + header_ret.manifest.signer.public_key + ); + assert_eq!( + header.manifest.signer.signature, + header_ret.manifest.signer.signature + ); + Ok(()) + } +} diff --git a/vendor/hypercore/src/oplog/mod.rs b/vendor/hypercore/src/oplog/mod.rs new file mode 100644 index 00000000..6c720201 --- /dev/null +++ b/vendor/hypercore/src/oplog/mod.rs @@ -0,0 +1,495 @@ +use futures::future::Either; +use std::convert::{TryFrom, TryInto}; + +use crate::common::{BitfieldUpdate, Store, StoreInfo, StoreInfoInstruction}; +use crate::encoding::{CompactEncoding, HypercoreState}; +use crate::tree::MerkleTreeChangeset; +use crate::{HypercoreError, Node, PartialKeypair}; + +mod entry; +mod header; + +pub(crate) use entry::{Entry, EntryTreeUpgrade}; +pub(crate) use header::{Header, HeaderTree}; + +pub(crate) const MAX_OPLOG_ENTRIES_BYTE_SIZE: u64 = 65536; +const HEADER_SIZE: usize = 4096; + +/// Oplog. +/// +/// There are two memory areas for a `Header` in `RandomAccessStorage`: one holds the current +/// header and the other the older one. Which one is current depends on the eighth bit of the +/// eighth byte of the stored headers. 
+#[derive(Debug)]
+pub(crate) struct Oplog {
+    header_bits: [bool; 2],
+    pub(crate) entries_length: u64,
+    pub(crate) entries_byte_length: u64,
+}
+
+/// Oplog create header outcome
+#[derive(Debug)]
+pub(crate) struct OplogCreateHeaderOutcome {
+    pub(crate) header: Header,
+    pub(crate) infos_to_flush: Box<[StoreInfo]>,
+}
+
+/// Oplog open outcome
+#[derive(Debug)]
+pub(crate) struct OplogOpenOutcome {
+    pub(crate) oplog: Oplog,
+    pub(crate) header: Header,
+    pub(crate) infos_to_flush: Box<[StoreInfo]>,
+    pub(crate) entries: Option<Box<[Entry]>>,
+}
+
+impl OplogOpenOutcome {
+    pub(crate) fn new(oplog: Oplog, header: Header, infos_to_flush: Box<[StoreInfo]>) -> Self {
+        Self {
+            oplog,
+            header,
+            infos_to_flush,
+            entries: None,
+        }
+    }
+    pub(crate) fn from_create_header_outcome(
+        oplog: Oplog,
+        create_header_outcome: OplogCreateHeaderOutcome,
+    ) -> Self {
+        Self {
+            oplog,
+            header: create_header_outcome.header,
+            infos_to_flush: create_header_outcome.infos_to_flush,
+            entries: None,
+        }
+    }
+}
+
+#[repr(usize)]
+enum OplogSlot {
+    FirstHeader = 0,
+    SecondHeader = HEADER_SIZE,
+    Entries = HEADER_SIZE * 2,
+}
+
+#[derive(Debug)]
+struct ValidateLeaderOutcome {
+    state: HypercoreState,
+    header_bit: bool,
+    partial_bit: bool,
+}
+
+// The first set of bits is [1, 0], see `get_next_header_oplog_slot_and_bit_value` for how
+// they change.
+const INITIAL_HEADER_BITS: [bool; 2] = [true, false];
+
+impl Oplog {
+    /// Opens an existing oplog from an existing byte buffer, or creates a new one.
+ pub(crate) fn open( + key_pair: &Option, + info: Option, + ) -> Result, HypercoreError> { + match info { + None => Ok(Either::Left(StoreInfoInstruction::new_all_content( + Store::Oplog, + ))), + Some(info) => { + let existing = info.data.expect("Could not get data of existing oplog"); + // First read and validate both headers stored in the existing oplog + let h1_outcome = Self::validate_leader(OplogSlot::FirstHeader as usize, &existing)?; + let h2_outcome = + Self::validate_leader(OplogSlot::SecondHeader as usize, &existing)?; + + // Depending on what is stored, the state needs to be set accordingly. + // See `get_next_header_oplog_slot_and_bit_value` for details on header_bits. + let mut outcome: OplogOpenOutcome = if let Some(mut h1_outcome) = h1_outcome { + let (header, header_bits): (Header, [bool; 2]) = + if let Some(mut h2_outcome) = h2_outcome { + let header_bits = [h1_outcome.header_bit, h2_outcome.header_bit]; + let header: Header = if header_bits[0] == header_bits[1] { + (*h1_outcome.state).decode(&existing)? + } else { + (*h2_outcome.state).decode(&existing)? + }; + (header, header_bits) + } else { + ( + (*h1_outcome.state).decode(&existing)?, + [h1_outcome.header_bit, h1_outcome.header_bit], + ) + }; + let oplog = Oplog { + header_bits, + entries_length: 0, + entries_byte_length: 0, + }; + OplogOpenOutcome::new(oplog, header, Box::new([])) + } else if let Some(mut h2_outcome) = h2_outcome { + // This shouldn't happen because the first header is saved to the first slot + // but Javascript supports this so we should too. + let header_bits: [bool; 2] = [!h2_outcome.header_bit, h2_outcome.header_bit]; + let oplog = Oplog { + header_bits, + entries_length: 0, + entries_byte_length: 0, + }; + OplogOpenOutcome::new( + oplog, + (*h2_outcome.state).decode(&existing)?, + Box::new([]), + ) + } else if let Some(key_pair) = key_pair { + // There is nothing in the oplog, start from fresh given key pair. + Self::fresh(key_pair.clone())? 
+            } else {
+                // The storage is empty and no key pair was given, so error out
+                return Err(HypercoreError::EmptyStorage {
+                    store: Store::Oplog,
+                });
+            };
+
+            // Read entries that might be stored in the existing content
+            if existing.len() > OplogSlot::Entries as usize {
+                let mut entry_offset = OplogSlot::Entries as usize;
+                let mut entries: Vec<Entry> = Vec::new();
+                let mut partials: Vec<bool> = Vec::new();
+                while let Some(mut entry_outcome) =
+                    Self::validate_leader(entry_offset, &existing)?
+                {
+                    let entry: Entry = entry_outcome.state.decode(&existing)?;
+                    entries.push(entry);
+                    partials.push(entry_outcome.partial_bit);
+                    entry_offset = (*entry_outcome.state).end();
+                }
+
+                // Remove all trailing partial entries
+                while !partials.is_empty() && partials[partials.len() - 1] {
+                    entries.pop();
+                    partials.pop();
+                }
+                outcome.entries = Some(entries.into_boxed_slice());
+            }
+            Ok(Either::Right(outcome))
+        }
+      }
+    }
+
+    /// Appends an upgraded changeset to the Oplog.
+    pub(crate) fn append_changeset(
+        &mut self,
+        changeset: &MerkleTreeChangeset,
+        bitfield_update: Option<BitfieldUpdate>,
+        atomic: bool,
+        header: &Header,
+    ) -> Result<OplogCreateHeaderOutcome, HypercoreError> {
+        let mut header: Header = header.clone();
+        let entry = self.update_header_with_changeset(changeset, bitfield_update, &mut header)?;
+
+        Ok(OplogCreateHeaderOutcome {
+            header,
+            infos_to_flush: self.append_entries(&[entry], atomic)?,
+        })
+    }
+
+    pub(crate) fn update_header_with_changeset(
+        &mut self,
+        changeset: &MerkleTreeChangeset,
+        bitfield_update: Option<BitfieldUpdate>,
+        header: &mut Header,
+    ) -> Result<Entry, HypercoreError> {
+        let tree_nodes: Vec<Node> = changeset.nodes.clone();
+        let entry: Entry = if changeset.upgraded {
+            let hash = changeset
+                .hash
+                .as_ref()
+                .expect("Upgraded changeset must have a hash before being appended");
+            let signature = changeset
+                .signature
+                .expect("Upgraded changeset must be signed before being appended");
+            let signature: Box<[u8]> = signature.to_bytes().into();
+            header.tree.root_hash = hash.clone();
+            header.tree.signature = signature.clone();
+            header.tree.length =
changeset.length; + + Entry { + user_data: vec![], + tree_nodes, + tree_upgrade: Some(EntryTreeUpgrade { + fork: changeset.fork, + ancestors: changeset.ancestors, + length: changeset.length, + signature, + }), + bitfield: bitfield_update, + } + } else { + Entry { + user_data: vec![], + tree_nodes, + tree_upgrade: None, + bitfield: bitfield_update, + } + }; + Ok(entry) + } + + /// Clears a segment, returns infos to write to storage. + pub(crate) fn clear( + &mut self, + start: u64, + end: u64, + ) -> Result, HypercoreError> { + let entry: Entry = Entry { + user_data: vec![], + tree_nodes: vec![], + tree_upgrade: None, + bitfield: Some(BitfieldUpdate { + drop: true, + start, + length: end - start, + }), + }; + self.append_entries(&[entry], false) + } + + /// Flushes pending changes, returns infos to write to storage. + pub(crate) fn flush( + &mut self, + header: &Header, + clear_traces: bool, + ) -> Result, HypercoreError> { + let (new_header_bits, infos_to_flush) = if clear_traces { + // When clearing traces, both slots need to be cleared, hence + // do this twice, but for the first time, ignore the truncate + // store info, to end up with three StoreInfos. + let (new_header_bits, infos_to_flush) = + Self::insert_header(header, 0, self.header_bits, clear_traces)?; + let mut combined_infos_to_flush: Vec = + infos_to_flush.into_vec().drain(0..1).into_iter().collect(); + let (new_header_bits, infos_to_flush) = + Self::insert_header(header, 0, new_header_bits, clear_traces)?; + combined_infos_to_flush.extend(infos_to_flush.into_vec()); + (new_header_bits, combined_infos_to_flush.into_boxed_slice()) + } else { + Self::insert_header(header, 0, self.header_bits, clear_traces)? + }; + self.entries_byte_length = 0; + self.entries_length = 0; + self.header_bits = new_header_bits; + Ok(infos_to_flush) + } + + /// Appends a batch of entries to the Oplog. 
+ fn append_entries( + &mut self, + batch: &[Entry], + atomic: bool, + ) -> Result, HypercoreError> { + let len = batch.len(); + let header_bit = self.get_current_header_bit(); + // Leave room for leaders + let mut state = HypercoreState::new_with_start_and_end(0, len * 8); + + for entry in batch.iter() { + state.preencode(entry)?; + } + + let mut buffer = state.create_buffer(); + for (i, entry) in batch.iter().enumerate() { + (*state).add_start(8)?; + let start = state.start(); + let partial_bit: bool = atomic && i < len - 1; + state.encode(entry, &mut buffer)?; + Self::prepend_leader( + state.start() - start, + header_bit, + partial_bit, + &mut state, + &mut buffer, + )?; + } + + let index = OplogSlot::Entries as u64 + self.entries_byte_length; + self.entries_length += len as u64; + self.entries_byte_length += buffer.len() as u64; + + Ok(vec![StoreInfo::new_content(Store::Oplog, index, &buffer)].into_boxed_slice()) + } + + fn fresh(key_pair: PartialKeypair) -> Result { + let entries_length: u64 = 0; + let entries_byte_length: u64 = 0; + let header = Header::new(key_pair); + let (header_bits, infos_to_flush) = + Self::insert_header(&header, entries_byte_length, INITIAL_HEADER_BITS, false)?; + let oplog = Oplog { + header_bits, + entries_length, + entries_byte_length, + }; + Ok(OplogOpenOutcome::from_create_header_outcome( + oplog, + OplogCreateHeaderOutcome { + header, + infos_to_flush, + }, + )) + } + + fn insert_header( + header: &Header, + entries_byte_length: u64, + current_header_bits: [bool; 2], + clear_traces: bool, + ) -> Result<([bool; 2], Box<[StoreInfo]>), HypercoreError> { + // The first 8 bytes will be filled with `prepend_leader`. 
+        let data_start_index: usize = 8;
+        let mut state = HypercoreState::new_with_start_and_end(data_start_index, data_start_index);
+
+        // Get the right slot and header bit
+        let (oplog_slot, header_bit) =
+            Oplog::get_next_header_oplog_slot_and_bit_value(&current_header_bits);
+        let mut new_header_bits = current_header_bits;
+        match oplog_slot {
+            OplogSlot::FirstHeader => new_header_bits[0] = header_bit,
+            OplogSlot::SecondHeader => new_header_bits[1] = header_bit,
+            _ => {
+                panic!("Invalid oplog slot");
+            }
+        }
+
+        // Preencode the new header
+        (*state).preencode(header)?;
+
+        // If clearing, pad the end with zeros
+        let end = if clear_traces {
+            let end = state.end();
+            state.set_end(HEADER_SIZE);
+            end
+        } else {
+            state.end()
+        };
+
+        // Create a buffer for the needed data
+        let mut buffer = state.create_buffer();
+
+        // Encode the header
+        (*state).encode(header, &mut buffer)?;
+
+        // Finally prepend the buffer's first 8 bytes with a CRC, a length and the right bits
+        Self::prepend_leader(
+            end - data_start_index,
+            header_bit,
+            false,
+            &mut state,
+            &mut buffer,
+        )?;
+
+        // The oplog is always truncated to the minimum byte size, which is right after
+        // all of the entries in the oplog finish.
+        let truncate_index = OplogSlot::Entries as u64 + entries_byte_length;
+        Ok((
+            new_header_bits,
+            vec![
+                StoreInfo::new_content(Store::Oplog, oplog_slot as u64, &buffer),
+                StoreInfo::new_truncate(Store::Oplog, truncate_index),
+            ]
+            .into_boxed_slice(),
+        ))
+    }
+
+    /// Prepends the given `State` with 4 bytes of CRC followed by 4 bytes containing the length of
+    /// the following buffer, 1 bit indicating which header is relevant to the entry (or, if used to
+    /// wrap the actual header, the header bit relevant for saving) and 1 bit that tells whether
+    /// the written batch is only partially finished. For this to work, the given state must have
+    /// 8 bytes reserved at the beginning, so that state.start can be moved back 8 bytes.
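As a minimal illustration of the 32-bit word layout that the leader doc comment above describes (30 bits of length, one partial bit, one header bit), here is a hypothetical standalone pack/unpack pair; the helper names are invented and not part of the crate:

```rust
// Hypothetical helpers mirroring the leader word layout described above:
// bits 2..32 hold the length, bit 1 the partial flag, bit 0 the header flag.
fn pack_leader_word(len: u32, header_bit: bool, partial_bit: bool) -> u32 {
    (len << 2) | (if partial_bit { 2 } else { 0 }) | (if header_bit { 1 } else { 0 })
}

fn unpack_leader_word(combined: u32) -> (u32, bool, bool) {
    (combined >> 2, combined & 1 == 1, combined & 2 == 2)
}

fn main() {
    let combined = pack_leader_word(4096, true, false);
    let (len, header_bit, partial_bit) = unpack_leader_word(combined);
    assert_eq!(len, 4096);
    assert!(header_bit);
    assert!(!partial_bit);
    // The 4-byte CRC32 stored before this word covers the word itself plus
    // the `len` bytes of content that follow it.
}
```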
+ fn prepend_leader( + len: usize, + header_bit: bool, + partial_bit: bool, + state: &mut HypercoreState, + buffer: &mut Box<[u8]>, + ) -> Result<(), HypercoreError> { + // The 4 bytes right before start of data is the length in 8+8+8+6=30 bits. The 31st bit is + // the partial bit and 32nd bit the header bit. + let start = (*state).start(); + (*state).set_start(start - len - 4)?; + let len_u32: u32 = len.try_into().unwrap(); + let partial_bit: u32 = if partial_bit { 2 } else { 0 }; + let header_bit: u32 = if header_bit { 1 } else { 0 }; + let combined: u32 = (len_u32 << 2) | header_bit | partial_bit; + state.encode_u32(combined, buffer)?; + + // Before that, is a 4 byte CRC32 that is a checksum of the above encoded 4 bytes and the + // content. + let start = state.start(); + state.set_start(start - 8)?; + let checksum = crc32fast::hash(&buffer[state.start() + 4..state.start() + 8 + len]); + state.encode_u32(checksum, buffer)?; + Ok(()) + } + + /// Validates that leader at given index is valid, and returns header and partial bits and + /// `State` for the header/entry that the leader was for. + fn validate_leader( + index: usize, + buffer: &[u8], + ) -> Result, HypercoreError> { + if buffer.len() < index + 8 { + return Ok(None); + } + let mut state = HypercoreState::new_with_start_and_end(index, buffer.len()); + let stored_checksum: u32 = state.decode_u32(buffer)?; + let combined: u32 = state.decode_u32(buffer)?; + let len = usize::try_from(combined >> 2) + .expect("Attempted converting to a 32 bit usize on below 32 bit system"); + + // NB: In the Javascript version IIUC zero length is caught only with a mismatch + // of checksums, which is silently interpreted to only mean "no value". That doesn't sound good: + // better to throw an error on mismatch and let the caller at least log the problem. 
+        if len == 0 || state.end() - state.start() < len {
+            return Ok(None);
+        }
+        let header_bit = combined & 1 == 1;
+        let partial_bit = combined & 2 == 2;
+
+        let new_start = index + 8;
+        state.set_end(new_start + len);
+        state.set_start(new_start)?;
+
+        let calculated_checksum = crc32fast::hash(&buffer[index + 4..state.end()]);
+        if calculated_checksum != stored_checksum {
+            return Err(HypercoreError::InvalidChecksum {
+                context: "Calculated checksum does not match oplog checksum".to_string(),
+            });
+        };
+
+        Ok(Some(ValidateLeaderOutcome {
+            header_bit,
+            partial_bit,
+            state,
+        }))
+    }
+
+    /// Gets the current header bit
+    fn get_current_header_bit(&self) -> bool {
+        self.header_bits[0] != self.header_bits[1]
+    }
+
+    /// Based on the given header_bits, determines whether the header should be saved to the
+    /// first or the second header slot, and the bit value it should get.
+    fn get_next_header_oplog_slot_and_bit_value(header_bits: &[bool; 2]) -> (OplogSlot, bool) {
+        // Writing a header to the disk is most efficient when only one area is saved.
+        // This makes it a bit less obvious to find out which of the headers is older
+        // and which newer. The bits indicate the header slot index in this way:
+        //
+        // [true, false] => [false, false] => [false, true] => [true, true] => [true, false] ...
+        // First => Second => First => Second => First
+        if header_bits[0] != header_bits[1] {
+            // First slot
+            (OplogSlot::FirstHeader, !header_bits[0])
+        } else {
+            // Second slot
+            (OplogSlot::SecondHeader, !header_bits[1])
+        }
+    }
+}
diff --git a/vendor/hypercore/src/prelude.rs b/vendor/hypercore/src/prelude.rs
new file mode 100644
index 00000000..0dd26ea4
--- /dev/null
+++ b/vendor/hypercore/src/prelude.rs
@@ -0,0 +1,5 @@
+//! Convenience wrapper to import all of Hypercore's core.
+pub use crate::common::{HypercoreError, Store};
+pub use crate::core::Hypercore;
+pub use crate::crypto::PartialKeypair;
+pub use crate::storage::Storage;
diff --git a/vendor/hypercore/src/storage/mod.rs b/vendor/hypercore/src/storage/mod.rs
new file mode 100644
index 00000000..7eb3776d
--- /dev/null
+++ b/vendor/hypercore/src/storage/mod.rs
@@ -0,0 +1,274 @@
+//! Save data to a desired storage backend.
+
+use futures::future::FutureExt;
+#[cfg(not(target_arch = "wasm32"))]
+use random_access_disk::RandomAccessDisk;
+use random_access_memory::RandomAccessMemory;
+use random_access_storage::{RandomAccess, RandomAccessError};
+use std::fmt::Debug;
+#[cfg(not(target_arch = "wasm32"))]
+use std::path::PathBuf;
+use tracing::instrument;
+
+use crate::{
+    common::{Store, StoreInfo, StoreInfoInstruction, StoreInfoType},
+    HypercoreError,
+};
+
+/// Save data to a desired storage backend.
+#[derive(Debug)]
+pub struct Storage<T>
+where
+    T: RandomAccess + Debug,
+{
+    tree: T,
+    data: T,
+    bitfield: T,
+    oplog: T,
+}
+
+pub(crate) fn map_random_access_err(err: RandomAccessError) -> HypercoreError {
+    match err {
+        RandomAccessError::IO {
+            return_code,
+            context,
+            source,
+        } => HypercoreError::IO {
+            context: Some(format!(
+                "RandomAccess IO error. Context: {context:?}, return_code: {return_code:?}",
+            )),
+            source,
+        },
+        RandomAccessError::OutOfBounds {
+            offset,
+            end,
+            length,
+        } => HypercoreError::InvalidOperation {
+            context: format!(
+                "RandomAccess out of bounds. Offset: {offset}, end: {end:?}, length: {length}",
+            ),
+        },
+    }
+}
+
+impl<T> Storage<T>
+where
+    T: RandomAccess + Debug + Send,
+{
+    /// Creates a new instance. Takes a callback that creates new storage instances and an overwrite flag.
+ pub async fn open(create: Cb, overwrite: bool) -> Result + where + Cb: Fn( + Store, + ) -> std::pin::Pin< + Box> + Send>, + >, + { + let mut tree = create(Store::Tree).await.map_err(map_random_access_err)?; + let mut data = create(Store::Data).await.map_err(map_random_access_err)?; + let mut bitfield = create(Store::Bitfield) + .await + .map_err(map_random_access_err)?; + let mut oplog = create(Store::Oplog).await.map_err(map_random_access_err)?; + + if overwrite { + if tree.len().await.map_err(map_random_access_err)? > 0 { + tree.truncate(0).await.map_err(map_random_access_err)?; + } + if data.len().await.map_err(map_random_access_err)? > 0 { + data.truncate(0).await.map_err(map_random_access_err)?; + } + if bitfield.len().await.map_err(map_random_access_err)? > 0 { + bitfield.truncate(0).await.map_err(map_random_access_err)?; + } + if oplog.len().await.map_err(map_random_access_err)? > 0 { + oplog.truncate(0).await.map_err(map_random_access_err)?; + } + } + + let instance = Self { + tree, + data, + bitfield, + oplog, + }; + + Ok(instance) + } + + /// Read info from store based on given instruction. Convenience method to `read_infos`. 
+    pub(crate) async fn read_info(
+        &mut self,
+        info_instruction: StoreInfoInstruction,
+    ) -> Result<StoreInfo, HypercoreError> {
+        let mut infos = self.read_infos_to_vec(&[info_instruction]).await?;
+        Ok(infos
+            .pop()
+            .expect("Should have gotten one info with one instruction"))
+    }
+
+    /// Read infos from stores based on given instructions
+    pub(crate) async fn read_infos(
+        &mut self,
+        info_instructions: &[StoreInfoInstruction],
+    ) -> Result<Box<[StoreInfo]>, HypercoreError> {
+        let infos = self.read_infos_to_vec(info_instructions).await?;
+        Ok(infos.into_boxed_slice())
+    }
+
+    /// Reads infos but retains them as a Vec
+    pub(crate) async fn read_infos_to_vec(
+        &mut self,
+        info_instructions: &[StoreInfoInstruction],
+    ) -> Result<Vec<StoreInfo>, HypercoreError> {
+        if info_instructions.is_empty() {
+            return Ok(vec![]);
+        }
+        let mut current_store: Store = info_instructions[0].store.clone();
+        let mut storage = self.get_random_access(&current_store);
+        let mut infos: Vec<StoreInfo> = Vec::with_capacity(info_instructions.len());
+        for instruction in info_instructions.iter() {
+            if instruction.store != current_store {
+                current_store = instruction.store.clone();
+                storage = self.get_random_access(&current_store);
+            }
+            match instruction.info_type {
+                StoreInfoType::Content => {
+                    let read_length = match instruction.length {
+                        Some(length) => length,
+                        None => storage.len().await.map_err(map_random_access_err)?,
+                    };
+                    let read_result = storage.read(instruction.index, read_length).await;
+                    let info: StoreInfo = match read_result {
+                        Ok(buf) => Ok(StoreInfo::new_content(
+                            instruction.store.clone(),
+                            instruction.index,
+                            &buf,
+                        )),
+                        Err(RandomAccessError::OutOfBounds {
+                            offset: _,
+                            end: _,
+                            length,
+                        }) => {
+                            if instruction.allow_miss {
+                                Ok(StoreInfo::new_content_miss(
+                                    instruction.store.clone(),
+                                    instruction.index,
+                                ))
+                            } else {
+                                Err(HypercoreError::InvalidOperation {
+                                    context: format!(
+                                        "Could not read from store {}, index {} / length {} is out of bounds for store length {}",
+                                        current_store,
+                                        instruction.index,
+                                        read_length,
+                                        length
+                                    ),
+                                })
+                            }
+                        }
+                        Err(e) => Err(map_random_access_err(e)),
+                    }?;
+                    infos.push(info);
+                }
+                StoreInfoType::Size => {
+                    let length = storage.len().await.map_err(map_random_access_err)?;
+                    infos.push(StoreInfo::new_size(
+                        instruction.store.clone(),
+                        instruction.index,
+                        length - instruction.index,
+                    ));
+                }
+            }
+        }
+        Ok(infos)
+    }
+
+    /// Flush info to storage. Convenience method to `flush_infos`.
+    pub(crate) async fn flush_info(&mut self, slice: StoreInfo) -> Result<(), HypercoreError> {
+        self.flush_infos(&[slice]).await
+    }
+
+    /// Flush infos to storage
+    pub(crate) async fn flush_infos(&mut self, infos: &[StoreInfo]) -> Result<(), HypercoreError> {
+        if infos.is_empty() {
+            return Ok(());
+        }
+        let mut current_store: Store = infos[0].store.clone();
+        let mut storage = self.get_random_access(&current_store);
+        for info in infos.iter() {
+            if info.store != current_store {
+                current_store = info.store.clone();
+                storage = self.get_random_access(&current_store);
+            }
+            match info.info_type {
+                StoreInfoType::Content => {
+                    if !info.miss {
+                        if let Some(data) = &info.data {
+                            storage
+                                .write(info.index, data)
+                                .await
+                                .map_err(map_random_access_err)?;
+                        }
+                    } else {
+                        storage
+                            .del(
+                                info.index,
+                                info.length.expect("When deleting, length must be given"),
+                            )
+                            .await
+                            .map_err(map_random_access_err)?;
+                    }
+                }
+                StoreInfoType::Size => {
+                    if info.miss {
+                        storage
+                            .truncate(info.index)
+                            .await
+                            .map_err(map_random_access_err)?;
+                    } else {
+                        panic!("Flushing a size that isn't a miss is not supported");
+                    }
+                }
+            }
+        }
+        Ok(())
+    }
+
+    fn get_random_access(&mut self, store: &Store) -> &mut T {
+        match store {
+            Store::Tree => &mut self.tree,
+            Store::Data => &mut self.data,
+            Store::Bitfield => &mut self.bitfield,
+            Store::Oplog => &mut self.oplog,
+        }
+    }
+}
+
+impl Storage<RandomAccessMemory> {
+    /// New storage backed by a `RandomAccessMemory` instance.
+ #[instrument(err)] + pub async fn new_memory() -> Result { + let create = |_| async { Ok(RandomAccessMemory::default()) }.boxed(); + // No reason to overwrite, as this is a new memory segment + Self::open(create, false).await + } +} + +#[cfg(not(target_arch = "wasm32"))] +impl Storage { + /// New storage backed by a `RandomAccessDisk` instance. + #[instrument(err)] + pub async fn new_disk(dir: &PathBuf, overwrite: bool) -> Result { + let storage = |store: Store| { + let name = match store { + Store::Tree => "tree", + Store::Data => "data", + Store::Bitfield => "bitfield", + Store::Oplog => "oplog", + }; + RandomAccessDisk::open(dir.as_path().join(name)).boxed() + }; + Self::open(storage, overwrite).await + } +} diff --git a/vendor/hypercore/src/tree/merkle_tree.rs b/vendor/hypercore/src/tree/merkle_tree.rs new file mode 100644 index 00000000..c9579199 --- /dev/null +++ b/vendor/hypercore/src/tree/merkle_tree.rs @@ -0,0 +1,1616 @@ +use compact_encoding::State; +use ed25519_dalek::Signature; +use futures::future::Either; +use intmap::IntMap; +#[cfg(feature = "cache")] +use moka::sync::Cache; +use std::convert::TryFrom; + +#[cfg(feature = "cache")] +use crate::common::cache::CacheOptions; +use crate::common::{HypercoreError, NodeByteRange, Proof, ValuelessProof}; +use crate::crypto::Hash; +use crate::oplog::HeaderTree; +use crate::{ + common::{StoreInfo, StoreInfoInstruction}, + Node, VerifyingKey, +}; +use crate::{ + DataBlock, DataHash, DataSeek, DataUpgrade, RequestBlock, RequestSeek, RequestUpgrade, Store, +}; + +use super::MerkleTreeChangeset; + +/// Merkle tree. 
+/// See https://github.com/hypercore-protocol/hypercore/blob/master/lib/merkle-tree.js +#[derive(Debug)] +pub(crate) struct MerkleTree { + pub(crate) roots: Vec, + pub(crate) length: u64, + pub(crate) byte_length: u64, + pub(crate) fork: u64, + pub(crate) signature: Option, + unflushed: IntMap, + truncated: bool, + truncate_to: u64, + #[cfg(feature = "cache")] + node_cache: Option>, +} + +const NODE_SIZE: u64 = 40; + +impl MerkleTree { + /// Opens MerkleTree, based on read infos. + pub(crate) fn open( + header_tree: &HeaderTree, + infos: Option<&[StoreInfo]>, + #[cfg(feature = "cache")] node_cache_options: &Option, + ) -> Result, Self>, HypercoreError> { + match infos { + None => { + let root_indices = get_root_indices(&header_tree.length); + + Ok(Either::Left( + root_indices + .iter() + .map(|&index| { + StoreInfoInstruction::new_content( + Store::Tree, + NODE_SIZE * index, + NODE_SIZE, + ) + }) + .collect::>() + .into_boxed_slice(), + )) + } + Some(infos) => { + let root_indices = get_root_indices(&header_tree.length); + + let mut roots: Vec = Vec::with_capacity(infos.len()); + let mut byte_length: u64 = 0; + let mut length: u64 = 0; + + for i in 0..root_indices.len() { + let index = root_indices[i]; + if index != index_from_info(&infos[i]) { + return Err(HypercoreError::CorruptStorage { + store: Store::Tree, + context: Some( + "Given slices vector not in the correct order".to_string(), + ), + }); + } + let data = infos[i].data.as_ref().unwrap(); + let node = node_from_bytes(&index, data)?; + byte_length += node.length; + // This is totalSpan in Javascript + length += 2 * ((node.index - length) + 1); + + roots.push(node); + } + if length > 0 { + length /= 2; + } + let signature: Option = if header_tree.signature.len() > 0 { + Some( + Signature::try_from(&*header_tree.signature).map_err(|_err| { + HypercoreError::InvalidSignature { + context: "Could not parse signature".to_string(), + } + })?, + ) + } else { + None + }; + + Ok(Either::Right(Self { + #[cfg(feature 
= "cache")] + node_cache: node_cache_options + .as_ref() + .map(|opts| opts.to_node_cache(roots.clone())), + roots, + length, + byte_length, + fork: header_tree.fork, + unflushed: IntMap::new(), + truncated: false, + truncate_to: 0, + signature, + })) + } + } + } + + /// Initialize a changeset for this tree. + /// This is called batch() in Javascript, see: + /// https://github.com/hypercore-protocol/hypercore/blob/master/lib/merkle-tree.js + pub(crate) fn changeset(&self) -> MerkleTreeChangeset { + MerkleTreeChangeset::new(self.length, self.byte_length, self.fork, self.roots.clone()) + } + + /// Commit a created changeset to the tree. + pub(crate) fn commit(&mut self, changeset: MerkleTreeChangeset) -> Result<(), HypercoreError> { + if !self.commitable(&changeset) { + return Err(HypercoreError::InvalidOperation { + context: "Tree was modified during changeset, refusing to commit".to_string(), + }); + } + + if changeset.upgraded { + self.commit_truncation(&changeset); + + self.roots = changeset.roots; + self.length = changeset.length; + self.byte_length = changeset.byte_length; + self.fork = changeset.fork; + self.signature = changeset.signature; + } + + for node in changeset.nodes { + self.unflushed.insert(node.index, node); + } + + Ok(()) + } + + /// Flush committed made changes to the tree + pub(crate) fn flush(&mut self) -> Box<[StoreInfo]> { + let mut infos_to_flush: Vec = Vec::new(); + if self.truncated { + infos_to_flush.extend(self.flush_truncation()); + } + infos_to_flush.extend(self.flush_nodes()); + infos_to_flush.into_boxed_slice() + } + + /// Get storage byte range of given hypercore index + pub(crate) fn byte_range( + &mut self, + hypercore_index: u64, + infos: Option<&[StoreInfo]>, + ) -> Result, NodeByteRange>, HypercoreError> { + let index = self.validate_hypercore_index(hypercore_index)?; + // Get nodes out of incoming infos + let nodes: IntMap> = self.infos_to_nodes(infos)?; + + // Start with getting the requested node, which will get the length + 
// of the byte range + let length_result = self.required_node(index, &nodes)?; + + // As for the offset, that might require fetching a lot more nodes whose + // lengths to sum + let offset_result = self.byte_offset_from_nodes(index, &nodes)?; + + // Construct response of either instructions (Left) or the result (Right) + let mut instructions: Vec = Vec::new(); + let mut byte_range = NodeByteRange { + index: 0, + length: 0, + }; + match length_result { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + byte_range.length = node.length; + } + } + match offset_result { + Either::Left(offset_instructions) => { + instructions.extend(offset_instructions); + } + Either::Right(offset) => { + byte_range.index = offset; + } + } + + if instructions.is_empty() { + Ok(Either::Right(byte_range)) + } else { + Ok(Either::Left(instructions.into_boxed_slice())) + } + } + + /// Get the byte offset given hypercore index + pub(crate) fn byte_offset( + &mut self, + hypercore_index: u64, + infos: Option<&[StoreInfo]>, + ) -> Result, u64>, HypercoreError> { + let index = self.validate_hypercore_index(hypercore_index)?; + self.byte_offset_from_index(index, infos) + } + + /// Get the byte offset of hypercore index in a changeset + pub(crate) fn byte_offset_in_changeset( + &mut self, + hypercore_index: u64, + changeset: &MerkleTreeChangeset, + infos: Option<&[StoreInfo]>, + ) -> Result, u64>, HypercoreError> { + if self.length == hypercore_index { + return Ok(Either::Right(self.byte_length)); + } + let index = hypercore_index_into_merkle_tree_index(hypercore_index); + let mut iter = flat_tree::Iterator::new(index); + let mut tree_offset = 0; + let mut is_right = false; + let mut parent: Option = None; + for node in &changeset.nodes { + if node.index == iter.index() { + if is_right { + if let Some(parent) = parent { + tree_offset += node.length - parent.length; + } + } + parent = Some(node.clone()); + is_right = iter.is_right(); + iter.parent(); 
+ } + } + + let search_index = if let Some(parent) = parent { + let r = changeset + .roots + .iter() + .position(|root| root.index == parent.index); + if let Some(r) = r { + for i in 0..r { + tree_offset += self.roots[i].length; + } + return Ok(Either::Right(tree_offset)); + } + parent.index + } else { + index + }; + + match self.byte_offset_from_index(search_index, infos)? { + Either::Left(instructions) => Ok(Either::Left(instructions)), + Either::Right(offset) => Ok(Either::Right(offset + tree_offset)), + } + } + + pub(crate) fn add_node(&mut self, node: Node) { + self.unflushed.insert(node.index, node); + } + + pub(crate) fn truncate( + &mut self, + length: u64, + fork: u64, + infos: Option<&[StoreInfo]>, + ) -> Result, MerkleTreeChangeset>, HypercoreError> { + let head = length * 2; + let mut full_roots = vec![]; + flat_tree::full_roots(head, &mut full_roots); + let nodes: IntMap> = self.infos_to_nodes(infos)?; + let mut changeset = self.changeset(); + + let mut instructions: Vec = Vec::new(); + for (i, root) in full_roots.iter().enumerate() { + if i < changeset.roots.len() && changeset.roots[i].index == *root { + continue; + } + while changeset.roots.len() > i { + changeset.roots.pop(); + } + + let node_or_instruction = self.required_node(*root, &nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + changeset.roots.push(node); + } + } + } + + if instructions.is_empty() { + while changeset.roots.len() > full_roots.len() { + changeset.roots.pop(); + } + changeset.fork = fork; + changeset.length = length; + changeset.ancestors = length; + changeset.byte_length = changeset + .roots + .iter() + .fold(0, |acc, node| acc + node.length); + changeset.upgraded = true; + Ok(Either::Right(changeset)) + } else { + Ok(Either::Left(instructions.into_boxed_slice())) + } + } + + /// Creates valueless proof from requests. 
+ /// TODO: This is now just a clone of javascript's + /// https://github.com/holepunchto/hypercore/blob/9ce03363cb8938dbab53baba7d7cc9dde0508a7e/lib/merkle-tree.js#L1181 + /// The implementation should be rewritten to make it clearer. + pub(crate) fn create_valueless_proof( + &mut self, + block: Option<&RequestBlock>, + hash: Option<&RequestBlock>, + seek: Option<&RequestSeek>, + upgrade: Option<&RequestUpgrade>, + infos: Option<&[StoreInfo]>, + ) -> Result, ValuelessProof>, HypercoreError> { + let nodes: IntMap> = self.infos_to_nodes(infos)?; + let mut instructions: Vec = Vec::new(); + let fork = self.fork; + let signature = self.signature; + let head = 2 * self.length; + let (from, to) = if let Some(upgrade) = upgrade.as_ref() { + let from = upgrade.start * 2; + (from, from + upgrade.length * 2) + } else { + (0, head) + }; + let indexed = normalize_indexed(block, hash); + + if from >= to || to > head { + return Err(HypercoreError::InvalidOperation { + context: "Invalid upgrade".to_string(), + }); + } + + let mut sub_tree = head; + let mut p = LocalProof { + seek: None, + nodes: None, + upgrade: None, + additional_upgrade: None, + }; + let mut untrusted_sub_tree = false; + if let Some(indexed) = indexed.as_ref() { + if seek.is_some() && upgrade.is_some() && indexed.index >= from { + return Err(HypercoreError::InvalidOperation { + context: "Cannot both do a seek and block/hash request when upgrading" + .to_string(), + }); + } + + if let Some(upgrade) = upgrade.as_ref() { + untrusted_sub_tree = indexed.last_index < upgrade.start; + } else { + untrusted_sub_tree = true; + } + + if untrusted_sub_tree { + sub_tree = nodes_to_root(indexed.index, indexed.nodes, to)?; + let seek_root = if let Some(seek) = seek.as_ref() { + let index_or_instructions = + self.seek_untrusted_tree(sub_tree, seek.bytes, &nodes)?; + match index_or_instructions { + Either::Left(new_instructions) => { + instructions.extend(new_instructions); + return 
Ok(Either::Left(instructions.into_boxed_slice())); + } + Either::Right(index) => index, + } + } else { + head + }; + if let Either::Left(new_instructions) = self.block_and_seek_proof( + Some(indexed), + seek.is_some(), + seek_root, + sub_tree, + &mut p, + &nodes, + )? { + instructions.extend(new_instructions); + } + } else if upgrade.is_some() { + sub_tree = indexed.index; + } + } + if !untrusted_sub_tree { + if let Some(seek) = seek.as_ref() { + let index_or_instructions = self.seek_from_head(to, seek.bytes, &nodes)?; + sub_tree = match index_or_instructions { + Either::Left(new_instructions) => { + instructions.extend(new_instructions); + return Ok(Either::Left(instructions.into_boxed_slice())); + } + Either::Right(index) => index, + }; + } + } + + if upgrade.is_some() { + if let Either::Left(new_instructions) = self.upgrade_proof( + indexed.as_ref(), + seek.is_some(), + from, + to, + sub_tree, + &mut p, + &nodes, + )? { + instructions.extend(new_instructions); + } + + if head > to { + if let Either::Left(new_instructions) = + self.additional_upgrade_proof(to, head, &mut p, &nodes)? 
+ { + instructions.extend(new_instructions); + } + } + } + + if instructions.is_empty() { + let (data_block, data_hash): (Option, Option) = + if let Some(block) = block.as_ref() { + ( + Some(DataHash { + index: block.index, + nodes: p.nodes.expect("nodes need to be present"), + }), + None, + ) + } else if let Some(hash) = hash.as_ref() { + ( + None, + Some(DataHash { + index: hash.index, + nodes: p.nodes.expect("nodes need to be set"), + }), + ) + } else { + (None, None) + }; + + let data_seek: Option = if let Some(seek) = seek.as_ref() { + p.seek.map(|p_seek| DataSeek { + bytes: seek.bytes, + nodes: p_seek, + }) + } else { + None + }; + + let data_upgrade: Option = if let Some(upgrade) = upgrade.as_ref() { + Some(DataUpgrade { + start: upgrade.start, + length: upgrade.length, + nodes: p.upgrade.expect("nodes need to be set"), + additional_nodes: if let Some(additional_upgrade) = p.additional_upgrade { + additional_upgrade + } else { + vec![] + }, + signature: signature + .expect("signature needs to be set") + .to_bytes() + .to_vec(), + }) + } else { + None + }; + + Ok(Either::Right(ValuelessProof { + fork, + block: data_block, + hash: data_hash, + seek: data_seek, + upgrade: data_upgrade, + })) + } else { + Ok(Either::Left(instructions.into_boxed_slice())) + } + } + + /// Verifies a proof received from a peer. + pub(crate) fn verify_proof( + &mut self, + proof: &Proof, + public_key: &VerifyingKey, + infos: Option<&[StoreInfo]>, + ) -> Result, MerkleTreeChangeset>, HypercoreError> { + let nodes: IntMap> = self.infos_to_nodes(infos)?; + let mut instructions: Vec = Vec::new(); + let mut changeset = self.changeset(); + + let mut unverified_block_root_node = verify_tree( + proof.block.as_ref(), + proof.hash.as_ref(), + proof.seek.as_ref(), + &mut changeset, + )?; + if let Some(upgrade) = proof.upgrade.as_ref() { + if verify_upgrade( + proof.fork, + upgrade, + unverified_block_root_node.as_ref(), + public_key, + &mut changeset, + )? 
{ + unverified_block_root_node = None; + } + } + + if let Some(unverified_block_root_node) = unverified_block_root_node { + let node_or_instruction = + self.required_node(unverified_block_root_node.index, &nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(verified_block_root_node) => { + if verified_block_root_node.hash != unverified_block_root_node.hash { + return Err(HypercoreError::InvalidChecksum { + context: format!( + "Invalid checksum at node {}, store {}", + unverified_block_root_node.index, + Store::Tree + ), + }); + } + } + } + } + + if instructions.is_empty() { + Ok(Either::Right(changeset)) + } else { + Ok(Either::Left(instructions.into_boxed_slice())) + } + } + + /// Attempts to get missing nodes from given index. NB: must be called in a loop. + pub(crate) fn missing_nodes( + &mut self, + index: u64, + infos: Option<&[StoreInfo]>, + ) -> Result, u64>, HypercoreError> { + let head = 2 * self.length; + let mut iter = flat_tree::Iterator::new(index); + let iter_right_span = iter.index() + iter.factor() / 2 - 1; + + // If the index is not in the current tree, we do not know how many missing nodes there are... + if iter_right_span >= head { + return Ok(Either::Right(0)); + } + + let nodes: IntMap> = self.infos_to_nodes(infos)?; + let mut count: u64 = 0; + while !iter.contains(head) { + match self.optional_node(iter.index(), &nodes)? 
{ + Either::Left(instruction) => { + return Ok(Either::Left(vec![instruction].into_boxed_slice())); + } + Either::Right(value) => { + if value.is_none() { + count += 1; + iter.parent(); + } else { + break; + } + } + } + } + Ok(Either::Right(count)) + } + + /// Is the changeset commitable to given tree + pub(crate) fn commitable(&self, changeset: &MerkleTreeChangeset) -> bool { + let correct_length: bool = if changeset.upgraded { + changeset.original_tree_length == self.length + } else { + changeset.original_tree_length <= self.length + }; + changeset.original_tree_fork == self.fork && correct_length + } + + fn commit_truncation(&mut self, changeset: &MerkleTreeChangeset) { + if changeset.ancestors < changeset.original_tree_length { + if changeset.ancestors > 0 { + let head = 2 * changeset.ancestors; + let mut iter = flat_tree::Iterator::new(head - 2); + loop { + let index = iter.index(); + if iter.contains(head) && index < head { + self.unflushed.insert(index, Node::new_blank(index)); + } + + if iter.offset() == 0 { + break; + } + iter.parent(); + } + } + + self.truncate_to = if self.truncated { + std::cmp::min(self.truncate_to, changeset.ancestors) + } else { + changeset.ancestors + }; + + self.truncated = true; + let mut unflushed_indices_to_delete: Vec = Vec::new(); + for node in self.unflushed.iter() { + if *node.0 >= 2 * changeset.ancestors { + unflushed_indices_to_delete.push(*node.0); + } + } + for index_to_delete in unflushed_indices_to_delete { + self.unflushed.remove(index_to_delete); + } + } + } + + pub(crate) fn flush_truncation(&mut self) -> Vec { + let offset = if self.truncate_to == 0 { + 0 + } else { + (self.truncate_to - 1) * 80 + 40 + }; + self.truncate_to = 0; + self.truncated = false; + vec![StoreInfo::new_truncate(Store::Tree, offset)] + } + + pub(crate) fn flush_nodes(&mut self) -> Vec { + let mut infos_to_flush: Vec = Vec::with_capacity(self.unflushed.len()); + for (_, node) in self.unflushed.drain() { + let (mut state, mut buffer) = 
State::new_with_size(40); + state + .encode_u64(node.length, &mut buffer) + .expect("Encoding u64 should not fail"); + state + .encode_fixed_32(&node.hash, &mut buffer) + .expect("Encoding fixed 32 bytes should not fail"); + infos_to_flush.push(StoreInfo::new_content( + Store::Tree, + node.index * 40, + &buffer, + )); + } + infos_to_flush + } + + /// Validates given hypercore index and returns tree index + fn validate_hypercore_index(&self, hypercore_index: u64) -> Result { + // Converts a hypercore index into a merkle tree index + let index = hypercore_index_into_merkle_tree_index(hypercore_index); + + // Check bounds + let head = 2 * self.length; + let compare_index = if index & 1 == 0 { + index + } else { + flat_tree::right_span(index) + }; + if compare_index >= head { + return Err(HypercoreError::BadArgument { + context: format!("Hypercore index {hypercore_index} is out of bounds"), + }); + } + Ok(index) + } + + fn byte_offset_from_index( + &mut self, + index: u64, + infos: Option<&[StoreInfo]>, + ) -> Result, u64>, HypercoreError> { + // Get nodes out of incoming infos + let nodes: IntMap> = self.infos_to_nodes(infos)?; + // Get offset + let offset_result = self.byte_offset_from_nodes(index, &nodes)?; + // Get offset + match offset_result { + Either::Left(offset_instructions) => { + Ok(Either::Left(offset_instructions.into_boxed_slice())) + } + Either::Right(offset) => Ok(Either::Right(offset)), + } + } + + fn byte_offset_from_nodes( + &self, + index: u64, + nodes: &IntMap>, + ) -> Result, u64>, HypercoreError> { + let index = if (index & 1) == 1 { + flat_tree::left_span(index) + } else { + index + }; + let mut head: u64 = 0; + let mut offset: u64 = 0; + + for root_node in &self.roots { + head += 2 * ((root_node.index - head) + 1); + + if index >= head { + offset += root_node.length; + continue; + } + let mut iter = flat_tree::Iterator::new(root_node.index); + + let mut instructions: Vec = Vec::new(); + while iter.index() != index { + if index < iter.index() { 
+ iter.left_child(); + } else { + let left_child = iter.left_child(); + let node_or_instruction = self.required_node(left_child, nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + offset += node.length; + } + } + iter.sibling(); + } + } + return if instructions.is_empty() { + Ok(Either::Right(offset)) + } else { + Ok(Either::Left(instructions)) + }; + } + + Err(HypercoreError::BadArgument { + context: format!("Could not calculate byte offset for index {index}"), + }) + } + + fn required_node( + &self, + index: u64, + nodes: &IntMap>, + ) -> Result, HypercoreError> { + match self.node(index, nodes, false)? { + Either::Left(value) => Ok(Either::Left(value)), + Either::Right(node) => { + if let Some(node) = node { + Ok(Either::Right(node)) + } else { + Err(HypercoreError::InvalidOperation { + context: format!("Node at {} is required, store {}", index, Store::Tree), + }) + } + } + } + } + + fn optional_node( + &self, + index: u64, + nodes: &IntMap>, + ) -> Result>, HypercoreError> { + self.node(index, nodes, true) + } + + fn node( + &self, + index: u64, + nodes: &IntMap>, + allow_miss: bool, + ) -> Result>, HypercoreError> { + // First check the cache + #[cfg(feature = "cache")] + if let Some(node_cache) = &self.node_cache { + if let Some(node) = node_cache.get(&index) { + return Ok(Either::Right(Some(node))); + } + } + + // Then check if unflushed has the node + if let Some(node) = self.unflushed.get(index) { + if node.blank || (self.truncated && node.index >= 2 * self.truncate_to) { + // The node is either blank or being deleted + return if allow_miss { + Ok(Either::Right(None)) + } else { + Err(HypercoreError::InvalidOperation { + context: format!( + "Could not load node: {}, store {}, unflushed", + index, + Store::Tree + ), + }) + }; + } + return Ok(Either::Right(Some(node.clone()))); + } + + // Then check if it's in the incoming nodes + let result = nodes.get(index); + if let 
Some(node_maybe) = result { + if let Some(node) = node_maybe { + if node.blank { + return if allow_miss { + Ok(Either::Right(None)) + } else { + Err(HypercoreError::InvalidOperation { + context: format!( + "Could not load node: {}, store {}, blank", + index, + Store::Tree + ), + }) + }; + } + return Ok(Either::Right(Some(node.clone()))); + } else if allow_miss { + return Ok(Either::Right(None)); + } else { + return Err(HypercoreError::InvalidOperation { + context: format!( + "Could not load node: {}, store {}, empty", + index, + Store::Tree + ), + }); + } + } + + // If not, return an instruction + let offset = 40 * index; + let length = 40; + let info = if allow_miss { + StoreInfoInstruction::new_content_allow_miss(Store::Tree, offset, length) + } else { + StoreInfoInstruction::new_content(Store::Tree, offset, length) + }; + Ok(Either::Left(info)) + } + + fn seek_from_head( + &self, + head: u64, + bytes: u64, + nodes: &IntMap>, + ) -> Result, u64>, HypercoreError> { + let mut instructions: Vec = Vec::new(); + let mut roots = vec![]; + flat_tree::full_roots(head, &mut roots); + let mut bytes = bytes; + + for root in roots { + let node_or_instruction = self.required_node(root, nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + if bytes == node.length { + return Ok(Either::Right(root)); + } + if bytes > node.length { + bytes -= node.length; + continue; + } + let instructions_or_result = self.seek_trusted_tree(root, bytes, nodes)?; + return match instructions_or_result { + Either::Left(new_instructions) => { + instructions.extend(new_instructions); + Ok(Either::Left(instructions)) + } + Either::Right(index) => Ok(Either::Right(index)), + }; + } + } + } + + if instructions.is_empty() { + Ok(Either::Right(head)) + } else { + Ok(Either::Left(instructions)) + } + } + + /// Trust that bytes are within the root tree and find the block at bytes. 
+ fn seek_trusted_tree( + &self, + root: u64, + bytes: u64, + nodes: &IntMap>, + ) -> Result, u64>, HypercoreError> { + if bytes == 0 { + return Ok(Either::Right(root)); + } + let mut iter = flat_tree::Iterator::new(root); + let mut instructions: Vec = Vec::new(); + let mut bytes = bytes; + while iter.index() & 1 != 0 { + let node_or_instruction = self.optional_node(iter.left_child(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + // Need to break immediately because it is unknown + // if this node is the one that will match. This means + // this function needs to be called in a loop where incoming + // nodes increase with each call. + break; + } + Either::Right(node) => { + if let Some(node) = node { + if node.length == bytes { + return Ok(Either::Right(iter.index())); + } + if node.length > bytes { + continue; + } + bytes -= node.length; + iter.sibling(); + } else { + iter.parent(); + return Ok(Either::Right(iter.index())); + } + } + } + } + if instructions.is_empty() { + Ok(Either::Right(iter.index())) + } else { + Ok(Either::Left(instructions)) + } + } + + /// Try to find the block at bytes without trusting that it *is* within the root passed. 
+ fn seek_untrusted_tree( + &self, + root: u64, + bytes: u64, + nodes: &IntMap>, + ) -> Result, u64>, HypercoreError> { + let mut instructions: Vec = Vec::new(); + let offset_or_instructions = self.byte_offset_from_nodes(root, nodes)?; + let mut bytes = bytes; + match offset_or_instructions { + Either::Left(new_instructions) => { + instructions.extend(new_instructions); + } + Either::Right(offset) => { + if offset > bytes { + return Err(HypercoreError::InvalidOperation { + context: "Invalid seek, wrong offset".to_string(), + }); + } + if offset == bytes { + return Ok(Either::Right(root)); + } + bytes -= offset; + let node_or_instruction = self.required_node(root, nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + if node.length <= bytes { + return Err(HypercoreError::InvalidOperation { + context: "Invalid seek, wrong length".to_string(), + }); + } + } + } + } + } + let instructions_or_result = self.seek_trusted_tree(root, bytes, nodes)?; + match instructions_or_result { + Either::Left(new_instructions) => { + instructions.extend(new_instructions); + Ok(Either::Left(instructions)) + } + Either::Right(index) => Ok(Either::Right(index)), + } + } + + fn block_and_seek_proof( + &self, + indexed: Option<&NormalizedIndexed>, + is_seek: bool, + seek_root: u64, + root: u64, + p: &mut LocalProof, + nodes: &IntMap>, + ) -> Result, ()>, HypercoreError> { + if let Some(indexed) = indexed { + let mut iter = flat_tree::Iterator::new(indexed.index); + let mut instructions: Vec = Vec::new(); + let mut p_nodes: Vec = Vec::new(); + + if !indexed.value { + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + p_nodes.push(node); + } + } + } + while iter.index() != root { + iter.sibling(); + if is_seek && iter.contains(seek_root) && iter.index() != seek_root 
{ + let success_or_instruction = + self.seek_proof(seek_root, iter.index(), p, nodes)?; + if let Either::Left(new_instructions) = success_or_instruction { + instructions.extend(new_instructions); + } + } else { + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + p_nodes.push(node); + } + } + } + + iter.parent(); + } + p.nodes = Some(p_nodes); + if instructions.is_empty() { + Ok(Either::Right(())) + } else { + Ok(Either::Left(instructions)) + } + } else { + self.seek_proof(seek_root, root, p, nodes) + } + } + + fn seek_proof( + &self, + seek_root: u64, + root: u64, + p: &mut LocalProof, + nodes: &IntMap>, + ) -> Result, ()>, HypercoreError> { + let mut iter = flat_tree::Iterator::new(seek_root); + let mut instructions: Vec = Vec::new(); + let mut seek_nodes: Vec = Vec::new(); + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + seek_nodes.push(node); + } + } + + while iter.index() != root { + iter.sibling(); + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => { + seek_nodes.push(node); + } + } + iter.parent(); + } + p.seek = Some(seek_nodes); + if instructions.is_empty() { + Ok(Either::Right(())) + } else { + Ok(Either::Left(instructions)) + } + } + + #[allow(clippy::too_many_arguments)] + fn upgrade_proof( + &self, + indexed: Option<&NormalizedIndexed>, + is_seek: bool, + from: u64, + to: u64, + sub_tree: u64, + p: &mut LocalProof, + nodes: &IntMap>, + ) -> Result, ()>, HypercoreError> { + let mut instructions: Vec = Vec::new(); + let mut upgrade: Vec = Vec::new(); + let mut has_upgrade = false; + + if from == 0 { + has_upgrade = 
true; + } + + let mut iter = flat_tree::Iterator::new(0); + let mut has_full_root = iter.full_root(to); + while has_full_root { + // check if they already have the node + if iter.index() + iter.factor() / 2 < from { + iter.next_tree(); + has_full_root = iter.full_root(to); + continue; + } + + // connect existing tree + if !has_upgrade && iter.contains(from - 2) { + has_upgrade = true; + let root = iter.index(); + let target = from - 2; + + iter.seek(target); + + while iter.index() != root { + iter.sibling(); + if iter.index() > target { + if p.nodes.is_none() && p.seek.is_none() && iter.contains(sub_tree) { + let success_or_instructions = self.block_and_seek_proof( + indexed, + is_seek, + sub_tree, + iter.index(), + p, + nodes, + )?; + if let Either::Left(new_instructions) = success_or_instructions { + instructions.extend(new_instructions); + } + } else { + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => upgrade.push(node), + } + } + } + iter.parent(); + } + + iter.next_tree(); + has_full_root = iter.full_root(to); + continue; + } + + if !has_upgrade { + has_upgrade = true; + } + + // if the subtree included is a child of this tree, include that one + // instead of a dup node + if p.nodes.is_none() && p.seek.is_none() && iter.contains(sub_tree) { + let success_or_instructions = + self.block_and_seek_proof(indexed, is_seek, sub_tree, iter.index(), p, nodes)?; + if let Either::Left(new_instructions) = success_or_instructions { + instructions.extend(new_instructions); + } + iter.next_tree(); + has_full_root = iter.full_root(to); + continue; + } + + // add root (can be optimised since the root might be in tree.roots) + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => 
upgrade.push(node), + } + + iter.next_tree(); + has_full_root = iter.full_root(to); + } + + if has_upgrade { + p.upgrade = Some(upgrade); + } + + if instructions.is_empty() { + Ok(Either::Right(())) + } else { + Ok(Either::Left(instructions)) + } + } + + fn additional_upgrade_proof( + &self, + from: u64, + to: u64, + p: &mut LocalProof, + nodes: &IntMap>, + ) -> Result, ()>, HypercoreError> { + let mut instructions: Vec = Vec::new(); + let mut additional_upgrade: Vec = Vec::new(); + let mut has_additional_upgrade = false; + + if from == 0 { + has_additional_upgrade = true; + } + + let mut iter = flat_tree::Iterator::new(0); + let mut has_full_root = iter.full_root(to); + while has_full_root { + // check if they already have the node + if iter.index() + iter.factor() / 2 < from { + iter.next_tree(); + has_full_root = iter.full_root(to); + continue; + } + + // connect existing tree + if !has_additional_upgrade && iter.contains(from - 2) { + has_additional_upgrade = true; + let root = iter.index(); + let target = from - 2; + + iter.seek(target); + + while iter.index() != root { + iter.sibling(); + if iter.index() > target { + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => additional_upgrade.push(node), + } + } + iter.parent(); + } + + iter.next_tree(); + has_full_root = iter.full_root(to); + continue; + } + + if !has_additional_upgrade { + has_additional_upgrade = true; + } + + // add root (can be optimised since the root is in tree.roots) + let node_or_instruction = self.required_node(iter.index(), nodes)?; + match node_or_instruction { + Either::Left(instruction) => { + instructions.push(instruction); + } + Either::Right(node) => additional_upgrade.push(node), + } + + iter.next_tree(); + has_full_root = iter.full_root(to); + } + + if has_additional_upgrade { + p.additional_upgrade = Some(additional_upgrade); + } + + if 
instructions.is_empty() {
+            Ok(Either::Right(()))
+        } else {
+            Ok(Either::Left(instructions))
+        }
+    }
+
+    fn infos_to_nodes(
+        &mut self,
+        infos: Option<&[StoreInfo]>,
+    ) -> Result<IntMap<u64, Option<Node>>, HypercoreError> {
+        match infos {
+            Some(infos) => {
+                let mut nodes: IntMap<u64, Option<Node>> = IntMap::with_capacity(infos.len());
+                for info in infos {
+                    let index = index_from_info(info);
+                    if !info.miss {
+                        let node = node_from_bytes(&index, info.data.as_ref().unwrap())?;
+                        #[cfg(feature = "cache")]
+                        if !node.blank {
+                            if let Some(node_cache) = &self.node_cache {
+                                node_cache.insert(node.index, node.clone())
+                            }
+                        }
+                        nodes.insert(index, Some(node));
+                    } else {
+                        nodes.insert(index, None);
+                    }
+                }
+                Ok(nodes)
+            }
+            None => Ok(IntMap::new()),
+        }
+    }
+}
+
+/// Converts a hypercore index into a merkle tree index. In the flat tree
+/// representation, the leaves are at the even indices and the parents at the
+/// odd indices. That's why we need to double the hypercore index value to get
+/// the right merkle tree index.
+fn hypercore_index_into_merkle_tree_index(hypercore_index: u64) -> u64 {
+    2 * hypercore_index
+}
+
+fn verify_tree(
+    block: Option<&DataBlock>,
+    hash: Option<&DataHash>,
+    seek: Option<&DataSeek>,
+    changeset: &mut MerkleTreeChangeset,
+) -> Result<Option<Node>, HypercoreError> {
+    let untrusted_node: Option<NormalizedData> = normalize_data(block, hash);
+
+    if untrusted_node.is_none() {
+        let no_seek = if let Some(seek) = seek.as_ref() {
+            seek.nodes.is_empty()
+        } else {
+            true
+        };
+        if no_seek {
+            return Ok(None);
+        }
+    }
+
+    let mut root: Option<Node> = None;
+
+    if let Some(seek) = seek {
+        if !seek.nodes.is_empty() {
+            let mut iter = flat_tree::Iterator::new(seek.nodes[0].index);
+            let mut q = NodeQueue::new(seek.nodes.clone(), None);
+            let node = q.shift(iter.index())?;
+            let mut current_root: Node = node.clone();
+            changeset.nodes.push(node);
+            while q.length > 0 {
+                let node = q.shift(iter.sibling())?;
+                let parent_node = parent_node(iter.parent(), &current_root, &node);
+                current_root = parent_node.clone();
+                changeset.nodes.push(node);
+                changeset.nodes.push(parent_node);
+            }
+            root = Some(current_root);
+        }
+    }
+
+    if let Some(untrusted_node) = untrusted_node {
+        let mut iter = flat_tree::Iterator::new(untrusted_node.index);
+
+        let mut q = NodeQueue::new(untrusted_node.nodes, root);
+        let node: Node = if let Some(value) = untrusted_node.value {
+            block_node(iter.index(), &value)
+        } else {
+            q.shift(iter.index())?
+        };
+        let mut current_root = node.clone();
+        changeset.nodes.push(node);
+        while q.length > 0 {
+            let node = q.shift(iter.sibling())?;
+            let parent_node = parent_node(iter.parent(), &current_root, &node);
+            current_root = parent_node.clone();
+            changeset.nodes.push(node);
+            changeset.nodes.push(parent_node);
+        }
+        root = Some(current_root);
+    }
+    Ok(root)
+}
+
+fn verify_upgrade(
+    fork: u64,
+    upgrade: &DataUpgrade,
+    block_root: Option<&Node>,
+    public_key: &VerifyingKey,
+    changeset: &mut MerkleTreeChangeset,
+) -> Result<bool, HypercoreError> {
+    let mut q = if let Some(block_root) = block_root {
+        NodeQueue::new(upgrade.nodes.clone(), Some(block_root.clone()))
+    } else {
+        NodeQueue::new(upgrade.nodes.clone(), None)
+    };
+    let mut grow: bool = !changeset.roots.is_empty();
+    let mut i: usize = 0;
+    let to: u64 = 2 * (upgrade.start + upgrade.length);
+    let mut iter = flat_tree::Iterator::new(0);
+    while iter.full_root(to) {
+        if i < changeset.roots.len() && changeset.roots[i].index == iter.index() {
+            i += 1;
+            iter.next_tree();
+            continue;
+        }
+        if grow {
+            grow = false;
+            let root_index = iter.index();
+            if i < changeset.roots.len() {
+                iter.seek(changeset.roots[changeset.roots.len() - 1].index);
+                while iter.index() != root_index {
+                    changeset.append_root(q.shift(iter.sibling())?, &mut iter);
+                }
+                iter.next_tree();
+                continue;
+            }
+        }
+        changeset.append_root(q.shift(iter.index())?, &mut iter);
+        iter.next_tree();
+    }
+    let extra = &upgrade.additional_nodes;
+
+    iter.seek(changeset.roots[changeset.roots.len() - 1].index);
+    i = 0;
+
+    while i < extra.len() && extra[i].index ==
iter.sibling() {
+        changeset.append_root(extra[i].clone(), &mut iter);
+        i += 1;
+    }
+
+    while i < extra.len() {
+        let node = extra[i].clone();
+        i += 1;
+        while node.index != iter.index() {
+            if iter.factor() == 2 {
+                return Err(HypercoreError::InvalidOperation {
+                    context: format!("Unexpected node: {}, store: {}", node.index, Store::Tree),
+                });
+            }
+            iter.left_child();
+        }
+        changeset.append_root(node, &mut iter);
+        iter.sibling();
+    }
+    changeset.fork = fork;
+    changeset.verify_and_set_signature(&upgrade.signature, public_key)?;
+    Ok(q.extra.is_none())
+}
+
+fn get_root_indices(header_tree_length: &u64) -> Vec<u64> {
+    let mut roots = vec![];
+    flat_tree::full_roots(header_tree_length * 2, &mut roots);
+    roots
+}
+
+fn index_from_info(info: &StoreInfo) -> u64 {
+    info.index / NODE_SIZE
+}
+
+fn node_from_bytes(index: &u64, data: &[u8]) -> Result<Node, HypercoreError> {
+    let len_buf = &data[..8];
+    let hash = &data[8..];
+    let mut state = State::from_buffer(len_buf);
+    let len = state.decode_u64(len_buf)?;
+    Ok(Node::new(*index, hash.to_vec(), len))
+}
+
+#[derive(Debug, Copy, Clone)]
+struct NormalizedIndexed {
+    value: bool,
+    index: u64,
+    nodes: u64,
+    last_index: u64,
+}
+
+fn normalize_indexed(
+    block: Option<&RequestBlock>,
+    hash: Option<&RequestBlock>,
+) -> Option<NormalizedIndexed> {
+    if let Some(block) = block {
+        Some(NormalizedIndexed {
+            value: true,
+            index: block.index * 2,
+            nodes: block.nodes,
+            last_index: block.index,
+        })
+    } else {
+        hash.map(|hash| NormalizedIndexed {
+            value: false,
+            index: hash.index,
+            nodes: hash.nodes,
+            last_index: flat_tree::right_span(hash.index) / 2,
+        })
+    }
+}
+
+#[derive(Debug, Clone)]
+struct NormalizedData {
+    value: Option<Vec<u8>>,
+    index: u64,
+    nodes: Vec<Node>,
+}
+
+fn normalize_data(block: Option<&DataBlock>, hash: Option<&DataHash>) -> Option<NormalizedData> {
+    if let Some(block) = block {
+        Some(NormalizedData {
+            value: Some(block.value.clone()),
+            index: block.index * 2,
+            nodes: block.nodes.clone(),
+        })
+    } else {
+        hash.map(|hash| NormalizedData {
+            value: None,
+            index: hash.index,
+            nodes: hash.nodes.clone(),
+        })
+    }
+}
+
+/// Struct to use for local building of proof
+#[derive(Debug, Clone)]
+struct LocalProof {
+    seek: Option<Vec<Node>>,
+    nodes: Option<Vec<Node>>,
+    upgrade: Option<Vec<Node>>,
+    additional_upgrade: Option<Vec<Node>>,
+}
+
+fn nodes_to_root(index: u64, nodes: u64, head: u64) -> Result<u64, HypercoreError> {
+    let mut iter = flat_tree::Iterator::new(index);
+    for _ in 0..nodes {
+        iter.parent();
+        if iter.contains(head) {
+            return Err(HypercoreError::InvalidOperation {
+                context: format!(
+                    "Nodes is out of bounds, index: {index}, nodes: {nodes}, head {head}"
+                ),
+            });
+        }
+    }
+    Ok(iter.index())
+}
+
+fn parent_node(index: u64, left: &Node, right: &Node) -> Node {
+    Node::new(
+        index,
+        Hash::parent(left, right).as_bytes().to_vec(),
+        left.length + right.length,
+    )
+}
+
+fn block_node(index: u64, value: &Vec<u8>) -> Node {
+    Node::new(
+        index,
+        Hash::data(value).as_bytes().to_vec(),
+        value.len() as u64,
+    )
+}
+
+/// Node queue
+struct NodeQueue {
+    i: usize,
+    nodes: Vec<Node>,
+    extra: Option<Node>,
+    length: usize,
+}
+impl NodeQueue {
+    fn new(nodes: Vec<Node>, extra: Option<Node>) -> Self {
+        let length = nodes.len() + if extra.is_some() { 1 } else { 0 };
+        Self {
+            i: 0,
+            nodes,
+            extra,
+            length,
+        }
+    }
+    fn shift(&mut self, index: u64) -> Result<Node, HypercoreError> {
+        if let Some(extra) = self.extra.take() {
+            if extra.index == index {
+                self.length -= 1;
+                return Ok(extra);
+            } else {
+                self.extra = Some(extra);
+            }
+        }
+        if self.i >= self.nodes.len() {
+            return Err(HypercoreError::InvalidOperation {
+                context: format!("Expected node {index}, got (nil)"),
+            });
+        }
+        let node = self.nodes[self.i].clone();
+        self.i += 1;
+        if node.index != index {
+            return Err(HypercoreError::InvalidOperation {
+                context: format!("Expected node {index}, got node {}", node.index),
+            });
+        }
+        self.length -= 1;
+        Ok(node)
+    }
+}
diff --git a/vendor/hypercore/src/tree/merkle_tree_changeset.rs b/vendor/hypercore/src/tree/merkle_tree_changeset.rs
new file mode 100644
index 00000000..be28873f
--- /dev/null
+++
b/vendor/hypercore/src/tree/merkle_tree_changeset.rs
@@ -0,0 +1,131 @@
+use ed25519_dalek::{Signature, SigningKey, VerifyingKey};
+use std::convert::TryFrom;
+
+use crate::{
+    crypto::{signable_tree, verify, Hash},
+    sign, HypercoreError, Node,
+};
+
+/// Changeset for a `MerkleTree`. This allows a `MerkleTree` to be changed incrementally in two
+/// steps: first apply the changes to this changeset and extract from it the information that goes
+/// into the oplog, and then commit the changeset to the tree.
+///
+/// This is called "MerkleTreeBatch" in Javascript, see:
+/// https://github.com/hypercore-protocol/hypercore/blob/master/lib/merkle-tree.js
+#[derive(Debug)]
+pub(crate) struct MerkleTreeChangeset {
+    pub(crate) length: u64,
+    pub(crate) ancestors: u64,
+    pub(crate) byte_length: u64,
+    pub(crate) batch_length: u64,
+    pub(crate) fork: u64,
+    pub(crate) roots: Vec<Node>,
+    pub(crate) nodes: Vec<Node>,
+    pub(crate) hash: Option<Box<[u8]>>,
+    pub(crate) signature: Option<Signature>,
+    pub(crate) upgraded: bool,
+
+    // Safeguarding values
+    pub(crate) original_tree_length: u64,
+    pub(crate) original_tree_fork: u64,
+}
+
+impl MerkleTreeChangeset {
+    pub(crate) fn new(
+        length: u64,
+        byte_length: u64,
+        fork: u64,
+        roots: Vec<Node>,
+    ) -> MerkleTreeChangeset {
+        Self {
+            length,
+            ancestors: length,
+            byte_length,
+            batch_length: 0,
+            fork,
+            roots,
+            nodes: vec![],
+            hash: None,
+            signature: None,
+            upgraded: false,
+            original_tree_length: length,
+            original_tree_fork: fork,
+        }
+    }
+
+    pub(crate) fn append(&mut self, data: &[u8]) -> usize {
+        let len = data.len();
+        let head = self.length * 2;
+        let mut iter = flat_tree::Iterator::new(head);
+        let node = Node::new(head, Hash::data(data).as_bytes().to_vec(), len as u64);
+        self.append_root(node, &mut iter);
+        self.batch_length += 1;
+        len
+    }
+
+    pub(crate) fn append_root(&mut self, node: Node, iter: &mut flat_tree::Iterator) {
+        self.upgraded = true;
+        self.length += iter.factor() / 2;
+        self.byte_length += node.length;
+        self.roots.push(node.clone());
+
self.nodes.push(node); + + while self.roots.len() > 1 { + let a = &self.roots[self.roots.len() - 1]; + let b = &self.roots[self.roots.len() - 2]; + if iter.sibling() != b.index { + iter.sibling(); // unset so it always points to last root + break; + } + + let node = Node::new( + iter.parent(), + Hash::parent(a, b).as_bytes().into(), + a.length + b.length, + ); + let _ = &self.nodes.push(node.clone()); + let _ = &self.roots.pop(); + let _ = &self.roots.pop(); + let _ = &self.roots.push(node); + } + } + + /// Hashes and signs the changeset + pub(crate) fn hash_and_sign(&mut self, signing_key: &SigningKey) { + let hash = self.hash(); + let signable = self.signable(&hash); + let signature = sign(signing_key, &signable); + self.hash = Some(hash); + self.signature = Some(signature); + } + + /// Verify and set signature with given public key + pub(crate) fn verify_and_set_signature( + &mut self, + signature: &[u8], + public_key: &VerifyingKey, + ) -> Result<(), HypercoreError> { + // Verify that the received signature matches the public key + let signature = + Signature::try_from(signature).map_err(|_| HypercoreError::InvalidSignature { + context: "Could not parse signature".to_string(), + })?; + let hash = self.hash(); + verify(public_key, &self.signable(&hash), Some(&signature))?; + + // Set values to changeset + self.hash = Some(hash); + self.signature = Some(signature); + Ok(()) + } + + /// Calculates a hash of the current set of roots + pub(crate) fn hash(&self) -> Box<[u8]> { + Hash::tree(&self.roots).as_bytes().into() + } + + /// Creates a signable slice from given hash + pub(crate) fn signable(&self, hash: &[u8]) -> Box<[u8]> { + signable_tree(hash, self.length, self.fork) + } +} diff --git a/vendor/hypercore/src/tree/mod.rs b/vendor/hypercore/src/tree/mod.rs new file mode 100644 index 00000000..02367a2a --- /dev/null +++ b/vendor/hypercore/src/tree/mod.rs @@ -0,0 +1,5 @@ +mod merkle_tree; +mod merkle_tree_changeset; + +pub(crate) use merkle_tree::MerkleTree; 
+pub(crate) use merkle_tree_changeset::MerkleTreeChangeset; diff --git a/vendor/hypercore/tests/common/mod.rs b/vendor/hypercore/tests/common/mod.rs new file mode 100644 index 00000000..fbe8616c --- /dev/null +++ b/vendor/hypercore/tests/common/mod.rs @@ -0,0 +1,102 @@ +use anyhow::Result; +use ed25519_dalek::{SigningKey, VerifyingKey, PUBLIC_KEY_LENGTH, SECRET_KEY_LENGTH}; +use random_access_disk::RandomAccessDisk; +use sha2::{Digest, Sha256}; +use std::io::prelude::*; +use std::path::Path; + +use hypercore::{Hypercore, HypercoreBuilder, PartialKeypair, Storage}; + +const TEST_PUBLIC_KEY_BYTES: [u8; PUBLIC_KEY_LENGTH] = [ + 0x97, 0x60, 0x6c, 0xaa, 0xd2, 0xb0, 0x8c, 0x1d, 0x5f, 0xe1, 0x64, 0x2e, 0xee, 0xa5, 0x62, 0xcb, + 0x91, 0xd6, 0x55, 0xe2, 0x00, 0xc8, 0xd4, 0x3a, 0x32, 0x09, 0x1d, 0x06, 0x4a, 0x33, 0x1e, 0xe3, +]; +// NB: In the javascript version this is 64 bytes, but that's because sodium appends the public +// key after the secret key for some reason. Only the first 32 bytes are actually used for signing +// on the javascript side too. 
+const TEST_SECRET_KEY_BYTES: [u8; SECRET_KEY_LENGTH] = [ + 0x27, 0xe6, 0x74, 0x25, 0xc1, 0xff, 0xd1, 0xd9, 0xee, 0x62, 0x5c, 0x96, 0x2b, 0x57, 0x13, 0xc3, + 0x51, 0x0b, 0x71, 0x14, 0x15, 0xf3, 0x31, 0xf6, 0xfa, 0x9e, 0xf2, 0xbf, 0x23, 0x5f, 0x2f, 0xfe, +]; + +#[derive(PartialEq, Debug)] +pub struct HypercoreHash { + pub bitfield: Option<String>, + pub data: Option<String>, + pub oplog: Option<String>, + pub tree: Option<String>, +} + +pub fn get_test_key_pair() -> PartialKeypair { + let public = VerifyingKey::from_bytes(&TEST_PUBLIC_KEY_BYTES).unwrap(); + let signing_key = SigningKey::from_bytes(&TEST_SECRET_KEY_BYTES); + assert_eq!(public.to_bytes(), signing_key.verifying_key().to_bytes()); + let secret = Some(signing_key); + PartialKeypair { public, secret } +} + +pub async fn create_hypercore(work_dir: &str) -> Result<Hypercore<RandomAccessDisk>> { + let path = Path::new(work_dir).to_owned(); + let key_pair = get_test_key_pair(); + let storage = Storage::new_disk(&path, true).await?; + Ok(HypercoreBuilder::new(storage) + .key_pair(key_pair) + .build() + .await?) +} + +pub async fn open_hypercore(work_dir: &str) -> Result<Hypercore<RandomAccessDisk>> { + let path = Path::new(work_dir).to_owned(); + let storage = Storage::new_disk(&path, false).await?; + Ok(HypercoreBuilder::new(storage).open(true).build().await?) 
+} + +pub fn create_hypercore_hash(dir: &str) -> HypercoreHash { + let bitfield = hash_file(format!("{dir}/bitfield")); + let data = hash_file(format!("{dir}/data")); + let oplog = hash_file(format!("{dir}/oplog")); + let tree = hash_file(format!("{dir}/tree")); + HypercoreHash { + bitfield, + data, + oplog, + tree, + } +} + +pub fn hash_file(file: String) -> Option<String> { + let path = std::path::Path::new(&file); + if !path.exists() { + None + } else { + let mut hasher = Sha256::new(); + let mut file = std::fs::File::open(path).unwrap(); + std::io::copy(&mut file, &mut hasher).unwrap(); + let hash_bytes = hasher.finalize(); + let hash = format!("{hash_bytes:X}"); + // An empty file has this hash; don't differentiate between a missing and an empty file. + // The Rust side is much easier and more performant to write if the empty file is created. + if hash == *"E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855" { + None + } else { + Some(hash) + } + } +} + +pub fn storage_contains_data(dir: &Path, data: &[u8]) -> bool { + for file_name in ["bitfield", "data", "oplog", "tree"] { + let file_path = dir.join(file_name); + let mut file = std::fs::File::open(file_path).unwrap(); + let mut buffer = Vec::new(); + file.read_to_end(&mut buffer).unwrap(); + if is_sub(&buffer, data) { + return true; + } + } + false +} + +fn is_sub<T: PartialEq>(haystack: &[T], needle: &[T]) -> bool { + haystack.windows(needle.len()).any(|c| c == needle) +} diff --git a/vendor/hypercore/tests/core.rs b/vendor/hypercore/tests/core.rs new file mode 100644 index 00000000..f3e8d2ec --- /dev/null +++ b/vendor/hypercore/tests/core.rs @@ -0,0 +1,79 @@ +pub mod common; + +use anyhow::Result; +use common::{create_hypercore, get_test_key_pair, open_hypercore, storage_contains_data}; +use hypercore::{HypercoreBuilder, Storage}; +use tempfile::Builder; +use test_log::test; + +#[cfg(feature = "async-std")] +use async_std::test as async_test; +#[cfg(feature = "tokio")] +use tokio::test as async_test; + 
+#[test(async_test)] +async fn hypercore_new() -> Result<()> { + let storage = Storage::new_memory().await?; + let _hypercore = HypercoreBuilder::new(storage).build().await?; + Ok(()) +} + +#[test(async_test)] +async fn hypercore_new_with_key_pair() -> Result<()> { + let storage = Storage::new_memory().await?; + let key_pair = get_test_key_pair(); + let _hypercore = HypercoreBuilder::new(storage) + .key_pair(key_pair) + .build() + .await?; + Ok(()) +} + +#[test(async_test)] +async fn hypercore_open_with_key_pair_error() -> Result<()> { + let storage = Storage::new_memory().await?; + let key_pair = get_test_key_pair(); + assert!(HypercoreBuilder::new(storage) + .key_pair(key_pair) + .open(true) + .build() + .await + .is_err()); + Ok(()) +} + +#[test(async_test)] +async fn hypercore_make_read_only() -> Result<()> { + let dir = Builder::new() + .prefix("hypercore_make_read_only") + .tempdir() + .unwrap(); + let write_key_pair = { + let mut hypercore = create_hypercore(&dir.path().to_string_lossy()).await?; + hypercore.append(b"Hello").await?; + hypercore.append(b"World!").await?; + hypercore.key_pair().clone() + }; + assert!(storage_contains_data( + dir.path(), + &write_key_pair.secret.as_ref().unwrap().to_bytes() + )); + assert!(write_key_pair.secret.is_some()); + let read_key_pair = { + let mut hypercore = open_hypercore(&dir.path().to_string_lossy()).await?; + assert_eq!(&hypercore.get(0).await?.unwrap(), b"Hello"); + assert_eq!(&hypercore.get(1).await?.unwrap(), b"World!"); + assert!(hypercore.make_read_only().await?); + hypercore.key_pair().clone() + }; + assert!(read_key_pair.secret.is_none()); + assert!(!storage_contains_data( + dir.path(), + &write_key_pair.secret.as_ref().unwrap().to_bytes()[16..], + )); + + let mut hypercore = open_hypercore(&dir.path().to_string_lossy()).await?; + assert_eq!(&hypercore.get(0).await?.unwrap(), b"Hello"); + assert_eq!(&hypercore.get(1).await?.unwrap(), b"World!"); + Ok(()) +} diff --git a/vendor/hypercore/tests/js/interop.js 
b/vendor/hypercore/tests/js/interop.js new file mode 100644 index 00000000..59e9f373 --- /dev/null +++ b/vendor/hypercore/tests/js/interop.js @@ -0,0 +1,128 @@ +const Hypercore = require('hypercore'); + +// Static test key pair obtained with: +// +// const crypto = require('hypercore-crypto'); +// const keyPair = crypto.keyPair(); +// console.log("public key", keyPair.publicKey.toString('hex').match(/../g).join(' ')); +// console.log("secret key", keyPair.secretKey.toString('hex').match(/../g).join(' ')); +const testKeyPair = { + publicKey: Buffer.from([ + 0x97, 0x60, 0x6c, 0xaa, 0xd2, 0xb0, 0x8c, 0x1d, 0x5f, 0xe1, 0x64, 0x2e, 0xee, 0xa5, 0x62, 0xcb, + 0x91, 0xd6, 0x55, 0xe2, 0x00, 0xc8, 0xd4, 0x3a, 0x32, 0x09, 0x1d, 0x06, 0x4a, 0x33, 0x1e, 0xe3]), + secretKey: Buffer.from([ + 0x27, 0xe6, 0x74, 0x25, 0xc1, 0xff, 0xd1, 0xd9, 0xee, 0x62, 0x5c, 0x96, 0x2b, 0x57, 0x13, 0xc3, + 0x51, 0x0b, 0x71, 0x14, 0x15, 0xf3, 0x31, 0xf6, 0xfa, 0x9e, 0xf2, 0xbf, 0x23, 0x5f, 0x2f, 0xfe, + 0x97, 0x60, 0x6c, 0xaa, 0xd2, 0xb0, 0x8c, 0x1d, 0x5f, 0xe1, 0x64, 0x2e, 0xee, 0xa5, 0x62, 0xcb, + 0x91, 0xd6, 0x55, 0xe2, 0x00, 0xc8, 0xd4, 0x3a, 0x32, 0x09, 0x1d, 0x06, 0x4a, 0x33, 0x1e, 0xe3]), +} + +if (process.argv.length !== 4) { + console.error("Usage: node interop.js [test step] [test set]") + process.exit(1); +} + +if (process.argv[2] === '1') { + step1Create(process.argv[3]).then(result => { + console.log("step1 ready", result); + }); +} else if (process.argv[2] === '2'){ + step2AppendHelloWorld(process.argv[3]).then(result => { + console.log("step2 ready", result); + }); +} else if (process.argv[2] === '3'){ + step3ReadAndAppendUnflushed(process.argv[3]).then(result => { + console.log("step3 ready", result); + }); +} else if (process.argv[2] === '4'){ + step4AppendWithFlush(process.argv[3]).then(result => { + console.log("step4 ready", result); + }); +} else if (process.argv[2] === '5'){ + step5ClearSome(process.argv[3]).then(result => { + console.log("step5 ready", result); + }); +} else { 
+ console.error(`Invalid test step ${process.argv[2]}`); + process.exit(2); +} + +async function step1Create(testSet) { + const core = new Hypercore(`work/${testSet}`, testKeyPair.publicKey, {keyPair: testKeyPair}); + await core.close(); +}; + +async function step2AppendHelloWorld(testSet) { + const core = new Hypercore(`work/${testSet}`, testKeyPair.publicKey, {keyPair: testKeyPair}); + const result = await core.append([Buffer.from('Hello'), Buffer.from('World')]); + assert(result.length, 2); + assert(result.byteLength, 10); + await core.close(); +}; + +async function step3ReadAndAppendUnflushed(testSet) { + const core = new Hypercore(`work/${testSet}`, testKeyPair.publicKey, {keyPair: testKeyPair}); + const hello = (await core.get(0)).toString(); + const world = (await core.get(1)).toString(); + assert(hello, "Hello"); + assert(world, "World"); + let result = await core.append(Buffer.from('first')); + assert(result.length, 3); + assert(result.byteLength, 15); + result = await core.append([Buffer.from('second'), Buffer.from('third')]); + assert(result.length, 5); + assert(result.byteLength, 26); + const multiBlock = Buffer.alloc(4096*3, 'a'); + result = await core.append(multiBlock); + assert(result.length, 6); + assert(result.byteLength, 12314); + result = await core.append([]); + assert(result.length, 6); + assert(result.byteLength, 12314); + const first = (await core.get(2)).toString(); + assert(first, "first"); + const second = (await core.get(3)).toString(); + assert(second, "second"); + const third = (await core.get(4)).toString(); + assert(third, "third"); + const multiBlockRead = await core.get(5); + if (!multiBlockRead.equals(multiBlock)) { + throw new Error(`Read buffers don't equal, ${multiBlockRead} but expected ${multiBlock}`); + } + await core.close(); +}; + +async function step4AppendWithFlush(testSet) { + const core = new Hypercore(`work/${testSet}`, testKeyPair.publicKey, {keyPair: testKeyPair}); + for (let i=0; i<5; i++) { + let result = await 
core.append(Buffer.from([i])); + assert(result.length, 6+i+1); + assert(result.byteLength, 12314+i+1); + } +} + +async function step5ClearSome(testSet) { + const core = new Hypercore(`work/${testSet}`, testKeyPair.publicKey, {keyPair: testKeyPair}); + await core.clear(5); + await core.clear(7, 9); + let info = await core.info(); + assert(info.length, 11); + assert(info.byteLength, 12319); + assert(info.contiguousLength, 5); + assert(info.padding, 0); + + let missing = await core.get(5, { wait: false }); + assert(missing, null); + missing = await core.get(7, { wait: false }); + assert(missing, null); + missing = await core.get(8, { wait: false }); + assert(missing, null); + const third = (await core.get(4)).toString(); + assert(third, "third"); +} + +function assert(real, expected) { + if (real != expected) { + throw new Error(`Got ${real} but expected ${expected}`); + } +} diff --git a/vendor/hypercore/tests/js/mod.rs b/vendor/hypercore/tests/js/mod.rs new file mode 100644 index 00000000..b0da51d4 --- /dev/null +++ b/vendor/hypercore/tests/js/mod.rs @@ -0,0 +1,50 @@ +use std::fs::{create_dir_all, remove_dir_all, remove_file}; +use std::path::Path; +use std::process::Command; + +pub fn cleanup() { + if Path::new("tests/js/node_modules").exists() { + remove_dir_all("tests/js/node_modules").expect("Unable to run rm to delete node_modules"); + } + + if Path::new("tests/js/work").exists() { + remove_dir_all("tests/js/work").expect("Unable to run rm to delete work"); + } + if Path::new("tests/js/package-lock.json").exists() { + remove_file("tests/js/package-lock.json") + .expect("Unable to run rm to delete package-lock.json"); + } +} + +pub fn install() { + let status = Command::new("npm") + .current_dir("tests/js") + .args(["install"]) + .status() + .expect("Unable to run npm install"); + assert_eq!( + Some(0), + status.code(), + "npm install did not run successfully. Do you have npm installed and a network connection?" 
+ ); +} + +pub fn prepare_test_set(test_set: &str) -> String { + let path = format!("tests/js/work/{}", test_set); + create_dir_all(&path).expect("Unable to create work directory"); + path +} + +pub fn js_run_step(step: u8, test_set: &str) { + let status = Command::new("npm") + .current_dir("tests/js") + .args(["run", "step", &step.to_string(), test_set]) + .status() + .expect("Unable to run npm run"); + assert_eq!( + Some(0), + status.code(), + "node step {} did not run successfully", + step + ); +} diff --git a/vendor/hypercore/tests/js/package.json b/vendor/hypercore/tests/js/package.json new file mode 100644 index 00000000..2c5db7da --- /dev/null +++ b/vendor/hypercore/tests/js/package.json @@ -0,0 +1,10 @@ +{ + "name": "hypercore-js-interop-tests", + "version": "0.0.1", + "scripts": { + "step": "node interop.js" + }, + "dependencies": { + "hypercore": "^10" + } +} diff --git a/vendor/hypercore/tests/js_interop.rs b/vendor/hypercore/tests/js_interop.rs new file mode 100644 index 00000000..5d02d737 --- /dev/null +++ b/vendor/hypercore/tests/js_interop.rs @@ -0,0 +1,192 @@ +pub mod common; +pub mod js; +use std::sync::Once; + +use anyhow::Result; +use common::{create_hypercore, create_hypercore_hash, open_hypercore}; +use js::{cleanup, install, js_run_step, prepare_test_set}; +use test_log::test; + +#[cfg(feature = "async-std")] +use async_std::test as async_test; +#[cfg(feature = "tokio")] +use tokio::test as async_test; + +const TEST_SET_JS_FIRST: &str = "jsfirst"; +const TEST_SET_RS_FIRST: &str = "rsfirst"; + +static INIT: Once = Once::new(); +fn init() { + INIT.call_once(|| { + // run initialization here + cleanup(); + install(); + }); +} + +#[test(async_test)] +#[cfg_attr(not(feature = "js_interop_tests"), ignore)] +async fn js_interop_js_first() -> Result<()> { + init(); + let work_dir = prepare_test_set(TEST_SET_JS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_0_hash()); + js_run_step(1, TEST_SET_JS_FIRST); + 
assert_eq!(create_hypercore_hash(&work_dir), step_1_hash()); + step_2_append_hello_world(&work_dir).await?; + assert_eq!(create_hypercore_hash(&work_dir), step_2_hash()); + js_run_step(3, TEST_SET_JS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_3_hash()); + step_4_append_with_flush(&work_dir).await?; + assert_eq!(create_hypercore_hash(&work_dir), step_4_hash()); + js_run_step(5, TEST_SET_JS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_5_hash()); + Ok(()) +} + +#[test(async_test)] +#[cfg_attr(not(feature = "js_interop_tests"), ignore)] +async fn js_interop_rs_first() -> Result<()> { + init(); + let work_dir = prepare_test_set(TEST_SET_RS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_0_hash()); + step_1_create(&work_dir).await?; + assert_eq!(create_hypercore_hash(&work_dir), step_1_hash()); + js_run_step(2, TEST_SET_RS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_2_hash()); + step_3_read_and_append_unflushed(&work_dir).await?; + assert_eq!(create_hypercore_hash(&work_dir), step_3_hash()); + js_run_step(4, TEST_SET_RS_FIRST); + assert_eq!(create_hypercore_hash(&work_dir), step_4_hash()); + step_5_clear_some(&work_dir).await?; + assert_eq!(create_hypercore_hash(&work_dir), step_5_hash()); + Ok(()) +} + +async fn step_1_create(work_dir: &str) -> Result<()> { + create_hypercore(work_dir).await?; + Ok(()) +} + +async fn step_2_append_hello_world(work_dir: &str) -> Result<()> { + let mut hypercore = open_hypercore(work_dir).await?; + let batch: &[&[u8]] = &[b"Hello", b"World"]; + let append_outcome = hypercore.append_batch(batch).await?; + assert_eq!(append_outcome.length, 2); + assert_eq!(append_outcome.byte_length, 10); + Ok(()) +} + +async fn step_3_read_and_append_unflushed(work_dir: &str) -> Result<()> { + let mut hypercore = open_hypercore(work_dir).await?; + let hello = hypercore.get(0).await?; + assert_eq!(hello.unwrap(), b"Hello"); + let world = hypercore.get(1).await?; + assert_eq!(world.unwrap(), 
b"World"); + let append_outcome = hypercore.append(b"first").await?; + assert_eq!(append_outcome.length, 3); + assert_eq!(append_outcome.byte_length, 15); + let batch: &[&[u8]] = &[b"second", b"third"]; + let append_outcome = hypercore.append_batch(batch).await?; + assert_eq!(append_outcome.length, 5); + assert_eq!(append_outcome.byte_length, 26); + let multi_block = &[0x61_u8; 4096 * 3]; + let append_outcome = hypercore.append(multi_block).await?; + assert_eq!(append_outcome.length, 6); + assert_eq!(append_outcome.byte_length, 12314); + let batch: Vec<Vec<u8>> = vec![]; + let append_outcome = hypercore.append_batch(&batch).await?; + assert_eq!(append_outcome.length, 6); + assert_eq!(append_outcome.byte_length, 12314); + let first = hypercore.get(2).await?; + assert_eq!(first.unwrap(), b"first"); + let second = hypercore.get(3).await?; + assert_eq!(second.unwrap(), b"second"); + let third = hypercore.get(4).await?; + assert_eq!(third.unwrap(), b"third"); + let multi_block_read = hypercore.get(5).await?; + assert_eq!(multi_block_read.unwrap(), multi_block); + Ok(()) +} + +async fn step_4_append_with_flush(work_dir: &str) -> Result<()> { + let mut hypercore = open_hypercore(work_dir).await?; + for i in 0..5 { + let append_outcome = hypercore.append(&[i]).await?; + assert_eq!(append_outcome.length, (6 + i + 1) as u64); + assert_eq!(append_outcome.byte_length, (12314 + i as u64 + 1)); + } + Ok(()) +} + +async fn step_5_clear_some(work_dir: &str) -> Result<()> { + let mut hypercore = open_hypercore(work_dir).await?; + hypercore.clear(5, 6).await?; + hypercore.clear(7, 9).await?; + let info = hypercore.info(); + assert_eq!(info.length, 11); + assert_eq!(info.byte_length, 12319); + assert_eq!(info.contiguous_length, 5); + let missing = hypercore.get(5).await?; + assert_eq!(missing, None); + let missing = hypercore.get(7).await?; + assert_eq!(missing, None); + let missing = hypercore.get(8).await?; + assert_eq!(missing, None); + let third = hypercore.get(4).await?; + 
assert_eq!(third.unwrap(), b"third"); + Ok(()) +} + +fn step_0_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: None, + data: None, + oplog: None, + tree: None, + } +} + +fn step_1_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: None, + data: None, + oplog: Some("A30BD5326139E8650F3D53CB43291945AE92796ABAEBE1365AC1B0C37D008936".into()), + tree: None, + } +} + +fn step_2_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: Some("0E2E1FF956A39192CBB68D2212288FE75B32733AB0C442B9F0471E254A0382A2".into()), + data: Some("872E4E50CE9990D8B041330C47C9DDD11BEC6B503AE9386A99DA8584E9BB12C4".into()), + oplog: Some("C65A6867991D29FCF98B4E4549C1039CB5B3C63D891BA1EA4F0BB47211BA4B05".into()), + tree: Some("8577B24ADC763F65D562CD11204F938229AD47F27915B0821C46A0470B80813A".into()), + } +} + +fn step_3_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: Some("DEC1593A7456C8C9407B9B8B9C89682DFFF33C3892BCC9D9F06956FEE0A1B949".into()), + data: Some("99EB5BC150A1102A7E50D15F90594660010B7FE719D54129065D1D417AA5015A".into()), + oplog: Some("5DCE3C7C86B0E129B32E5A07CA3DF668006A42F9D75399D6E4DB3F18256B8468".into()), + tree: Some("38788609A8634DC8D34F9AE723F3169ADB20768ACFDFF266A43B7E217750DD1E".into()), + } +} + +fn step_4_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: Some("9B844E9378A7D13D6CDD4C1FF12FB313013E5CC472C6CB46497033563FE6B8F1".into()), + data: Some("AF3AC31CFBE1733C62496CF8E856D5F1EFB4B06CBF1E74204221C89E2F3E1CDE".into()), + oplog: Some("46E01E9CECDF6E7EA85807F65C5F3CEED96583F3BF97BC6835A6DA05E39FE8E9".into()), + tree: Some("26339A21D606A1F731B90E8001030651D48378116B06A9C1EF87E2538194C2C6".into()), + } +} + +fn step_5_hash() -> common::HypercoreHash { + common::HypercoreHash { + bitfield: Some("40C9CED82AE0B7A397C9FDD14EEB7F70B74E8F1229F3ED931852591972DDC3E0".into()), + data: Some("D9FFCCEEE9109751F034ECDAE328672956B90A6E0B409C3173741B8A5D0E75AB".into()), + oplog: 
Some("803384F10871FB60E53A7F833E6E1E9729C6D040D960164077963092BBEBA274".into()), + tree: Some("26339A21D606A1F731B90E8001030651D48378116B06A9C1EF87E2538194C2C6".into()), + } +} diff --git a/vendor/hypercore/tests/model.rs b/vendor/hypercore/tests/model.rs new file mode 100644 index 00000000..e6a52fed --- /dev/null +++ b/vendor/hypercore/tests/model.rs @@ -0,0 +1,127 @@ +pub mod common; + +use proptest::prelude::*; +use proptest::test_runner::FileFailurePersistence; +use proptest_derive::Arbitrary; + +const MAX_FILE_SIZE: u64 = 50000; + +#[derive(Clone, Debug, Arbitrary)] +enum Op { + Get { + #[proptest(strategy(index_strategy))] + index: u64, + }, + Append { + #[proptest(regex(data_regex))] + data: Vec<u8>, + }, + Clear { + #[proptest(strategy(divisor_strategy))] + len_divisor_for_start: u8, + #[proptest(strategy(divisor_strategy))] + len_divisor_for_length: u8, + }, +} + +fn index_strategy() -> impl Strategy<Value = u64> { + 0..MAX_FILE_SIZE +} + +fn divisor_strategy() -> impl Strategy<Value = u8> { + 1_u8..17_u8 +} + +fn data_regex() -> &'static str { + // Write 0..5000 byte chunks of ASCII characters as dummy data + "([ -~]{1,1}\n){0,5000}" +} + +proptest! 
{ + #![proptest_config(ProptestConfig { + failure_persistence: Some(Box::new(FileFailurePersistence::WithSource("regressions"))), + ..Default::default() + })] + + #[test] + #[cfg(feature = "async-std")] + fn implementation_matches_model(ops: Vec<Op>) { + assert!(async_std::task::block_on(assert_implementation_matches_model(ops))); + } + + #[test] + #[cfg(feature = "tokio")] + fn implementation_matches_model(ops: Vec<Op>) { + let rt = tokio::runtime::Runtime::new().unwrap(); + assert!(rt.block_on(async { + assert_implementation_matches_model(ops).await + })); + } +} + +async fn assert_implementation_matches_model(ops: Vec<Op>) -> bool { + use hypercore::{HypercoreBuilder, Storage}; + + let storage = Storage::new_memory() + .await + .expect("Memory storage creation should be successful"); + let mut hypercore = HypercoreBuilder::new(storage) + .build() + .await + .expect("Hypercore creation should be successful"); + + let mut model: Vec<Option<Vec<u8>>> = vec![]; + + for op in ops { + match op { + Op::Append { data } => { + hypercore + .append(&data) + .await + .expect("Append should be successful"); + model.push(Some(data)); + } + Op::Get { index } => { + let data = hypercore + .get(index) + .await + .expect("Get should be successful"); + if index >= hypercore.info().length { + assert_eq!(data, None); + } else { + assert_eq!(data, model[index as usize].clone()); + } + } + Op::Clear { + len_divisor_for_start, + len_divisor_for_length, + } => { + let start = { + let result = model.len() as u64 / len_divisor_for_start as u64; + if result == model.len() as u64 { + if !model.is_empty() { + result - 1 + } else { + 0 + } + } else { + result + } + }; + let length = model.len() as u64 / len_divisor_for_length as u64; + let end = start + length; + let model_end = if end < model.len() as u64 { + end + } else { + model.len() as u64 + }; + hypercore + .clear(start, end) + .await + .expect("Clear should be successful"); + model[start as usize..model_end as usize].fill(None); + } + } + } + true +} diff --git 
a/vendor/hypercore/vendor b/vendor/hypercore/vendor new file mode 120000 index 00000000..668c8896 --- /dev/null +++ b/vendor/hypercore/vendor @@ -0,0 +1 @@ +../../vendor \ No newline at end of file
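The `append`/`append_root` logic in `merkle_tree_changeset.rs` above leans on the in-order "flat tree" numbering that the vendored code gets from the `flat-tree` crate (`factor`, `sibling`, `parent`). As a hedged illustration of that numbering scheme only, here is a minimal self-contained sketch with hand-rolled helper functions; these are hypothetical helpers for explanation, not the `flat-tree` crate's API:

```rust
// Sketch of flat-tree in-order index arithmetic, for illustration only.
// Leaves sit at even indices (0, 2, 4, ...); odd indices are interior nodes.

/// Depth of a node = number of trailing one bits in its index.
fn depth(i: u64) -> u64 {
    (!i).trailing_zeros() as u64
}

/// Horizontal offset of a node within its depth row.
fn offset(i: u64) -> u64 {
    i >> (depth(i) + 1)
}

/// Index of the node at a given depth and offset.
fn index(depth: u64, offset: u64) -> u64 {
    (2 * offset + 1) * (1u64 << depth) - 1
}

/// Parent of a node: one level up, half the offset.
fn parent(i: u64) -> u64 {
    index(depth(i) + 1, offset(i) / 2)
}

/// Sibling of a node: same depth, offset flipped by one.
fn sibling(i: u64) -> u64 {
    index(depth(i), offset(i) ^ 1)
}

fn main() {
    // Blocks 0, 1, 2, 3 map to leaf nodes 0, 2, 4, 6.
    assert_eq!(parent(0), 1); // node 1 hashes leaves 0 and 2
    assert_eq!(parent(2), 1);
    assert_eq!(sibling(0), 2);
    assert_eq!(sibling(4), 6);
    assert_eq!(parent(1), 3); // node 3 spans leaves 0..=6
    // `append` seeks to head = length * 2, i.e. the next free leaf:
    let length = 3u64;
    assert_eq!(length * 2, 6);
    println!("flat-tree arithmetic checks passed");
}
```

This is why `append` computes `head = self.length * 2` and why `append_root` grows `self.length` by `iter.factor() / 2` (the number of leaves a root at that depth spans); the sibling/parent walk is what lets two adjacent roots be merged into their parent hash.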