
A Glance at the Heart: Proof-of-Authority Technology in the UMI Network

Greetings from the UMI Team! Our Whitepaper describes in detail the key pros and cons of the two mechanisms which the great majority of other cryptocurrencies are based on:
Proof-of-Work (PoW) — mining technology. Used in Bitcoin, Ethereum, Litecoin, Monero, etc.
Proof-of-Stake (PoS) and its derivatives — forging technology. Used in Nxt, PeerCoin, NEO, PRIZM, etc.
After a careful analysis of PoW and PoS, both of which are designed to fight centralization, we concluded that both fail in their main mission: in the long run they lead to network centralization and poor performance. For this reason, we took a different approach. We use a Proof-of-Authority (PoA) algorithm coupled with master nodes, which provides the UMI network with both decentralization and maximum speed.
The Whitepaper covers the essentials. This article gives a clear and detailed explanation of the technology implemented in the UMI network. Let's look at the heart of the network right now.
Proof-of-Authority: How and Why It Emerged
It's been over a decade since the first transaction in the Bitcoin network. Over this time, blockchain technology has undergone qualitative changes, largely because the cryptocurrency world, watching Proof-of-Work's defects surface in the Bitcoin network year after year, has actively searched for ways to eliminate them.
The decentralization and reliability of PoW come at the cost of low capacity and a scalability problem that prevents the network from rectifying this shortcoming. Moreover, as Bitcoin grew popular, the greed of miners, who benefit from the high fees caused by low network throughput, became a serious problem. Miners also began forming pools, making the network more and more centralized. The "human factor" that purposefully slowed down the network and undermined its security could never be eliminated. All this essentially limits the potential for using PoW-based cryptocurrencies on a bigger scale.
Since PoW upgrade ideas came to nothing, crypto community activists suggested radically new solutions and started to develop other protocols. This is how the Proof-of-Stake technology emerged. However, it proved better in theory than in practice. PoS-based cryptocurrencies do demonstrate higher capacity overall, but the difference is not striking, and PoS could not fully solve the scalability issue either.
Hoping to cope with the problems plaguing all cryptocurrencies, the community came up with brand-new algorithms based on alternative operating principles. One of them is the Proof-of-Authority technology, meant to be an effective alternative with high capacity and a solution to the scalability problem. The idea of using PoA in cryptocurrencies was proposed by Gavin Wood, a high-profile blockchain programmer and Ethereum co-founder.
Proof-of-Authority Major Features
PoA's major difference from PoW and PoS lies in the elimination of miner and forger races. Network users do not fight for the right to be the first to create a block and receive a reward, as happens with cryptocurrencies based on other technologies. Here the blockchain's operating principle is substantially different: Proof-of-Authority uses a "reputation system" and only allows trusted nodes to create blocks.
This solves the scalability problem, considerably increasing capacity and allowing transactions to be handled almost instantly, without time wasted on the unnecessary calculations performed by miners and forgers. Moreover, trusted nodes must meet strict capacity requirements. This is one of the main reasons we selected PoA: it is the only technology that lets us make full use of super-fast nodes.
Due to these features, the Proof-of-Authority algorithm is seen as one of the most effective and promising options for bringing blockchain to various business sectors. For instance, its model perfectly fits the logistics and supply chain management sectors. As an outstanding example, PoA is effectively used by the Microsoft Azure cloud platform to offer various tools for bringing blockchain solutions to businesses.
How the UMI Network Eliminates the Defects and Incorporates the Benefits of the Proof-of-Authority Method
Any system has both drawbacks and advantages — so does PoA. According to the original PoA model, each trusted node can create a block, while it is technically impossible for ordinary users to interfere with the system operation. This makes PoA-based cryptocurrencies a lot more centralized than those based on PoW or PoS. This has always been the main reason for criticizing the PoA technology.
We understood that only a completely decentralized product could translate our vision of a "hard-to-hit", secure and transparent monetary instrument into reality. Therefore, we started by upgrading PoA's basic operating principle in order to create a product that incorporates all the best features while eliminating the defects. What we've got is a decentralized PoA method. Here is a simple explanation:
- We've divided the nodes in the UMI network into two types: master nodes and validator nodes.
- Only master nodes have the right to create blocks and confirm transactions. Master node holders include the UMI team and trusted partners from across the world. Moreover, we deliberately keep the identity of some master-node partners secret in order to protect ourselves against potential negative influence, manipulation, and threats from third parties. This ensures maximally coherent and reliable system operation.
- However, since the core idea behind a decentralized cryptocurrency rules out any kind of trust, the blockchain is secured to prevent master nodes from harming the network in the event of sabotage or collusion. That could happen to Bitcoin or other PoW- or PoS-based cryptocurrencies if, for example, several large mining pools united to perform a 51% attack, but it can't happen to UMI. First, the worst that bad-faith master node holders can do is negligibly slow down the network, and the UMI network automatically responds by banning such nodes. Thus, master nodes prevent any partner from doing intentional harm to the network, and no partner could do so even with the support of most other partners. Nothing, not even quantum computers, will help hackers. Read our post "UMI Blockchain Six-Level Security" for more details.
- A validator node can be launched by any participant. Validator nodes maintain the network by verifying the correctness of blocks and excluding the possibility of fakes. In doing so they increase overall network security and help master nodes carry out their functions. More importantly, validator node holders monitor master node holders and confirm that the latter comply with the rules. You can find more details about validator nodes in the article mentioned above.
- Finally, the network allows all interested users to launch light nodes (SPV), which enable viewing and sending transactions without downloading the blockchain or maintaining the network. With light nodes, any network user can verify that the system is operating properly without having to download the blockchain.
- In addition, we are developing protection for the network in case 100% of the master nodes (10,000 master nodes in total) are "disabled" for some reason. Even though this is virtually impossible, we've planned ahead: in the worst-case scenario, the system will automatically switch to PoS and will be able to continue processing transactions. We're going to tell you about this in our next publications.
Thus, the UMI network uses an upgraded version of this technology that possesses all its advantages with the drawbacks eliminated. This model is truly decentralized and maximally secure.
Another major drawback of PoA-based cryptocurrencies is the lack of incentives for users. PoA involves neither forging nor mining, which let users earn cryptocurrency while generating new coins. The absence of rewards for maintaining the network is the main reason the crypto community has shown little interest in PoA. This is, of course, unfair. With this in mind, the UMI team has found the best solution: a unique staking smart-contract. It lets you increase your coins by up to 40% per month with no mining or forging, meaning the human factor cannot negatively affect decentralization or network performance.
New-Generation Proof-of-Authority
The UMI network uses an upgraded version of PoA technology that possesses all its advantages with the drawbacks virtually eliminated. This makes UMI a decentralized, easily scalable, and yet highly secure, productive, profitable and fair cryptocurrency, working for the benefit of all people.
The widespread use of UMI can change most aspects of society in different areas, including production, commerce, logistics, and all financial arrangements. We are just beginning this journey and thrilled to have you with us. Let's change the world together!
Best regards, UMI Team!
submitted by UMITop to u/UMITop [link] [comments]

Can you detect block reward increase without a full node?

Some concern trolls claim miners may conspire to increase the base reward to trick SPV wallets. If there is an easy and inexpensive way to detect it, it could be a good counter.
Couldn't you just check the latest block, or a reasonable number of blocks, or the latest block in combination with the SPV block headers?
Is it really necessary to know all the UTXOs to verify the mining reward wasn't increased?
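Something like this sketch is what I have in mind. It assumes the SPV client can fetch one full block plus, for each input, the funding transaction with a merkle proof against already-validated headers; the helpers fetch_tx_with_proof and verify_merkle_proof are hypothetical stand-ins, not any real library's API:

```python
COIN = 100_000_000
HALVING_INTERVAL = 210_000

def max_subsidy(height: int) -> int:
    # Consensus subsidy schedule: 50 coins, halved every 210,000 blocks.
    return (50 * COIN) >> (height // HALVING_INTERVAL)

def coinbase_is_honest(block, headers) -> bool:
    fees = 0
    for tx in block.transactions[1:]:              # every tx except the coinbase
        in_value = 0
        for txin in tx.inputs:
            # Hypothetical helpers: fetch the funding tx plus a merkle proof
            # tying it to a header we already validated via SPV.
            prev_tx, proof = fetch_tx_with_proof(txin.prev_txid)
            if not verify_merkle_proof(prev_tx, proof, headers):
                return False
            in_value += prev_tx.outputs[txin.prev_index].value
        fees += in_value - sum(o.value for o in tx.outputs)
    coinbase_total = sum(o.value for o in block.transactions[0].outputs)
    return coinbase_total <= max_subsidy(block.height) + fees
```

This needs the block's transactions and their parent transactions, but no UTXO set.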
submitted by bch_ftw to btc [link] [comments]

If everyone should run full nodes then why POW?

Preamble: I always post my viewpoint on a sub with an opposing standpoint, for the sole reason that the best way to learn is from critique; hence my choice of posting here. Please don't confuse rebuttals with trolling; it's often just a misunderstanding on either or both parties' side. Please refrain from pointing out people or altcoins and evaluate premises on their own merits. Also please consider a comment before downvoting.
So, as might be deduced, I am against the notion that everyone should run a full node; instead, miners can be 'trusted' (due to economic incentives) to provide an honest chain, the one with the most proof of work, and SPV is good enough for 99% of users. Hopefully the hypothetical scenario following will help to further (or weaken) my case and understanding. Note that this was a shower thought and might be crushed with a single comment (which would be good and what I'm here for).
Introducing Bitcoin with zero greenhouse gas emissions and improved security consensus rules:
Consider these hypothetical changes to Bitcoin’s consensus rules for a hypothetical upgrade to full nodes (note again this is very quick thoughts so over time this could be improved significantly).
So here we have a new and improved Bitcoin that is environmentally friendly and significantly more secure due to the fact that you can compound security by taking a hash that is buried under sChain's POW for as far back as you wish.
Looking forward to those spotting flaws in my preliminary thoughts on this (I am expecting a lot to be honest).
So in the hypothetical scenario that this POW-leaching consensus model holds (after this initial suggestion is optimised as far as it can be), do we not have to rethink this "every node should be validating all transactions" idea?
EDIT: After some discussion I want to make some revisions (mainly to remove any POS'ish incentives the initial description might have created)... 1) There will be no rewards whatsoever for creating blocks 2) The block producers are chosen randomly from UTXO set based on sChain's block hashes
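For concreteness, here is a toy sketch of how (2) could work: value-weighted selection from the UTXO set, seeded deterministically by an sChain block hash. All names are mine and this is not a real protocol, just the shape of the idea:

```python
import hashlib

def pick_producer(utxos, schain_block_hash: bytes):
    # utxos: list of (outpoint, value) pairs.
    total = sum(value for _, value in utxos)
    # Derive a deterministic number in [0, total) from the sChain block hash.
    seed = int.from_bytes(hashlib.sha256(schain_block_hash).digest(), "big")
    target = seed % total
    running = 0
    for outpoint, value in sorted(utxos):    # canonical order so all nodes agree
        running += value
        if target < running:
            return outpoint                  # this UTXO's owner produces the block
    raise AssertionError("unreachable when total > 0")
```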
submitted by fiddley2000 to Bitcoin [link] [comments]

An explanation of 0-fee transactions in BCH

This post does not address mobile wallets or SPV wallets.

I am unsure about the ABC GUI wallet, but the BitcoinUnlimited GUI wallet has the option to send a transaction with no fee. However, there is a priority requirement enforced by the BCH network that must be met for that free transaction to be included in a block by default (miners can override this, but I won't get into that here).

PLEASE NOTE: The following calculation is not 100% correct and the reality is a bit more complicated, but it serves as a good general guideline for knowing whether your tx will be accepted with no fee.

To be included in a block with no fee, the transaction must destroy at least 1 coinday (57,600,000 "priority points"). The main reason for this is to prevent spam.

You can find how many "priority points" a transaction will destroy using the following formula: for each input, multiply the input's age in blocks by its value in satoshi; add the points for all inputs together, then divide by the size of the tx in bytes (example below). If the result is greater than 1 coinday (57,600,000), then a miner that supports free transactions under the default configuration will include it in the next block they mine.
The 57,600,000 comes from COIN * 144 / 250. COIN is the number of satoshis that make up 1 whole bitcoin (100,000,000). 144 is how many blocks are mined in a day on average (6 * 24). 250 is a reference transaction size in bytes, but I won't get into that right now.

Example: if your tx spends 2 inputs, the first of which is 10 blocks old with a value of 10,000 satoshi and the second of which is 20 blocks old with a value of 7,500 satoshi, and the size of the tx is 212 bytes, then the transaction's priority is ((10 * 10,000) + (20 * 7,500)) / 212 = 250,000 / 212 ≈ 1,179.25. That is clearly less than 57,600,000, so the tx will need to sit in the mempool for some time to earn enough points to be included in a block. In general the formula favors older inputs and smaller transactions.
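If code is easier to follow, here is the same default-priority check as a small Python sketch; it illustrates the formula above, not the actual node implementation:

```python
COIN = 100_000_000                       # satoshis in one whole coin
PRIORITY_THRESHOLD = COIN * 144 // 250   # 57,600,000

def tx_priority(inputs, tx_size_bytes):
    # inputs: list of (age_in_blocks, value_in_satoshi) pairs
    points = sum(age * value for age, value in inputs)
    return points / tx_size_bytes

p = tx_priority([(10, 10_000), (20, 7_500)], 212)
print(round(p, 2))                   # 1179.25
print(p >= PRIORITY_THRESHOLD)       # False: the tx waits in the mempool
```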

I hope this explanation helps you understand the free relay system a little bit better.

Edit: "priority points" is in quotes because its a term i just made up for the value so this explanation was easier to understand. internally in the code it is typically just called dPriority.
Edit for more information: - By default, miners allocate a small percentage of the block (IIRC 5% by default) to free transactions
- this is another less mathy description of the bitcoin days destroyed concept https://en.bitcoin.it/wiki/Bitcoin_Days_Destroyed.
submitted by GregGriffith to btc [link] [comments]

Decred Journal – August 2018

Note: you can read this on GitHub (link), Medium (link) or old Reddit (link) to see all the links.

Development

dcrd: Version 1.3.0 RC1 (Release Candidate 1) is out! The main features of this release are significant performance improvements, including some that benefit SPV clients. Full release notes and downloads are on GitHub.
The default minimum transaction fee rate was reduced from 0.001 to 0.0001 DCR/kB. Do not try to send such small-fee transactions just yet, until the majority of the network upgrades.
Release process was changed to use release branches and bump version on the master branch at the beginning of a release cycle. Discussed in this chat.
The codebase is ready for the new Go 1.11 version. Migration to vgo module system is complete and the 1.4.0 release will be built using modules. The list of versioned modules and a hierarchy diagram are available here.
The testnet was reset and bumped to version 3.
Comments are welcome for the proposal to implement smart fee estimation, which is important for Lightning Network.
@matheusd recorded a code review video for new Decred developers that explains how tickets are selected for voting.
dcrwallet: Version 1.3.0 RC1 features new SPV sync mode, new ticket buyer, new APIs for Decrediton and a host of bug fixes. On the dev side, dcrwallet also migrated to the new module system.
Decrediton: Version 1.3.0 RC1 adds the new SPV sync mode that syncs roughly 5x faster. The feature is off by default while it receives more testing from experienced users. Other notable changes include a design polish and experimental Politeia integration.
Politeia: Proposal editing is being developed and has a short demo. This will allow proposal owners to edit their proposal in response to community feedback before voting begins. The challenges associated with this feature relate to updating censorship tokens and maintaining a clear history of which version comments were made on. @fernandoabolafio produced this architecture diagram which may be of interest to developers.
@degeri joined to perform security testing of Politeia and found several issues.
dcrdata: mainnet explorer upgraded to v2.1 with several new features. For users: credit/debit tx filter on address page, showing miner fees on coinbase transaction page, estimate yearly ticket rewards on main page, cool new hamburger menu and keyboard navigation. For developers: new chain parameters page, experimental Insight API support, endpoints for coin supply and block rewards, testnet3 support. Lots of minor API changes and frontend tweaks, many bug fixes and robustness improvements.
The upcoming v3.0 entered beta and is deployed on beta.dcrdata.org. Check out the new charts page. Feedback and bug reports are appreciated. Finally, the development version v3.1.0-pre is on alpha.dcrdata.org.
Android: updated to be compatible with the latest SPV code and is syncing, several performance issues are worked on. Details were posted in chat. Alpha testing has started, to participate please join #dev and ask for the APK.
iOS: backend is mostly complete, as well as the front end. Support for devices with smaller screens was improved. What works now: creating and recovering wallets, listing of transactions, receiving DCR, displaying and scanning QR codes, browsing account information, SPV connection to peers, downloading headers. Some bugs need fixing before making testable builds.
Ticket splitting: v0.6.0 beta released with improved fee calculation and multiple bug fixes.
docs: introduced new Governance section that grouped some old articles as well as the new Politeia page.
@Richard-Red created a concept repository sandbox with policy documents, to illustrate the kind of policies that could be approved and amended by Politeia proposals.
decred.org: 8 contributors added and 4 removed, including 2 advisors (discussion here).
decredmarketcap.com is a brand new website that shows the most accurate DCR market data. Clean design, mobile friendly, no javascript required.
Dev activity stats for August: 239 active PRs, 219 commits, 25k added and 11k deleted lines spread across 8 repositories. Contributions came from 2-10 developers per repository. (chart)

Network

Hashrate: went from 54 to 76 PH/s, the low was 50 and the new all-time high is 100 PH/s. BeePool share rose to ~50% while F2Pool shrank to 30%, followed by coinmine.pl at 5% and Luxor at 3%.
Staking: 30-day average ticket price is 95.6 DCR (+3.0) as of Sep 3. During the month, ticket price fluctuated between a low of 92.2 and high of 100.5 DCR. Locked DCR represented between 3.8 and 3.9 million or 46.3-46.9% of the supply.
Nodes: there are 217 public listening and 281 normal nodes per dcred.eu. Version distribution: 2% at v1.4.0(pre) (dev builds), 5% on v1.3.0 (RC1), 62% on v1.2.0 (-5%), 22% on v1.1.2 (-2%), 6% on v1.1.0 (-1%). Almost 69% of nodes are v.1.2.0 and higher and support client filters. Data snapshot of Aug 31.

ASICs

Obelisk posted 3 email updates in August. DCR1 units are reportedly shipping with 1 TH/s hashrate and will be upgraded with firmware to 1.5 TH/s. Batch 1 customers will receive compensation for missed shipment dates, but only after Batch 5 ships. Batch 2-5 customers will be receiving the updated slim design.
Innosilicon announced the new D9+ DecredMaster: 2.8 TH/s at 1,230 W priced $1,499. Specified shipping date was Aug 10-15.
FFMiner DS19 claims 3.1 TH/s for Blake256R14 at 680 W and simultaneously 1.55 TH/s for Blake2B at 410 W, the price is $1,299. Shipping Aug 20-25.
Another newly noticed miner offer is this unit that does 46 TH/s at 2,150 W at the price of $4,720. It is shipping Nov 2018 and the stats look very close to the Pangolin Whatsminer DCR (which now has a page on asicminervalue).

Integrations

www.d1pool.com joined the list of stakepools for a total of 16.
Australian CoinTree added DCR trading. The platform supports fiat, there are some limitations during the upgrade to a new system but also no fees in the "Early access mode". On a related note, CoinTree is working on a feature to pay household bills with cryptocurrencies it supports.
Three new OTC desks were added to exchanges page at decred.org.
Two mobile wallets integrated Decred:
Reminder: do your best to understand the security and privacy model before using any wallet software. Points to consider: who controls the seed, does the wallet talk to the nodes directly or via middlemen, is it open source or not?

Adoption

Merchants:

Marketing

Targeted advertising report for August was posted by @timhebel. Facebook appeal is pending, some Google and Twitter campaigns were paused and some updated. Read more here.
Contribution to the @decredproject Twitter account has evolved over the past few months. A #twitter_ops channel is being used on Matrix to collaboratively draft and execute project account tweets (including retweets). Anyone with an interest in contributing to the Twitter account can ask for an invitation to the channel and can start contributing content and ideas there for evaluation by the Twitter group. As a result, no minority or unilateral veto over tweets is possible. (from GitHub)

Events

Attended:
For those willing to help with the events:
BAB: Hey all, we are gearing up for conference season. I have a list of places we hope to attend but need to know who besides @joshuam and @Haon are willing to do public speaking, willing to work booths, or help out at them? You will need to be well versed on not just what is Decred, but the history of Decred etc... DM me if you are interested. (#event_planning)
The Decred project is looking for ambassadors. If you are looking for a fun cryptocurrency to get involved in send me a DM or come talk to me on Decred slack. (@marco_peereboom, longer version here)

Media

Decred Assembly episode 21 is available. @jy-p and lead dcrwallet developer @jrick discussed SPV from Satoshi's whitepaper, how it can be improved upon and what's coming in Decred.
Decred Assembly episodes 1-21 are available in audio only format here.
New instructional articles on stakey.club: Decrediton setup, Deleting the wallet, Installing Go, Installing dcrd, dcrd as a Linux service. Available in both English and Portuguese.
Decred scored #32 in the August issue of Chinese CCID ratings. The evaluation model was explained in this interview.
Satis Group rated Decred highly in their cryptoasset valuation research report (PDF). This was featured by several large media outlets, but some did not link to or omitted Decred entirely, citing low market cap.
Featured articles:
Articles:
Videos:

Community Discussions

Community stats:
Comm systems news:
After another debate about chat systems more people began testing and using Matrix, leading to some gardening on that platform:
Highlights:
Reddit: substantive discussion about Decred cons; ecosystem fund; a thread about voter engagement, Politeia UX and trolling; idea of a social media system for Decred by @michae2xl; how profitable is the Obelisk DCR1.
Chats: cross-chain trading via LN; plans for contractor management system, lower-level decision making and contractor privacy vs transparency for stakeholders; measuring dev activity; what if the network stalls, multiple implementations of Decred for more resilience, long term vision behind those extensive tests and accurate comments in the codebase; ideas for process for policy documents, hosting them in Pi and approving with ticket voting; about SPV wallet disk size, how compact filters work; odds of a wallet fetching a wrong block in SPV; new module system in Go; security of allowing Android app backups; why PoW algo change proposal must be specified in great detail; thoughts about NIPoPoWs and SPV; prerequisites for shipping SPV by default (continued); Decred vs Dash treasury and marketing expenses, spending other people's money; why Decred should not invade a country, DAO and nation states, entangling with nation state is poor resource allocation; how winning tickets are determined and attack vectors; Politeia proposal moderation, contractor clearance, the scale of proposals and decision delegation, initial Politeia vote to approve Politeia itself; chat systems, Matrix/Slack/Discord/RocketChat/Keybase (continued); overview of Korean exchanges; no breaking changes in vgo; why project fund burn rate must keep low; asymptotic behavior of Decred and other ccs, tail emission; count of full nodes and incentives to run them; Politeia proposal translations and multilingual environment.
An unusual event was the chat about double negatives and other oddities in languages in #trading.

Markets

DCR started the month at USD 56 / BTC 0.0073 and had a two week decline. On Aug 14 the whole market took a huge drop and briefly went below USD 200 billion. Bitcoin went below USD 6,000 and top 100 cryptos lost 5-30%. The lowest point coincided with Bitcoin dominance peak at 54.5%. On that day Decred dived -17% and reached the bottom of USD 32 / BTC 0.00537. Since then it went sideways in the USD 35-45 / BTC 0.0054-0.0064 range. Around Aug 24, Huobi showed DCR trading volume above USD 5M and this coincided with a minor recovery.
@ImacallyouJawdy posted some creative analysis based on ticket data.

Relevant External

StopAndDecrypt published an extensive article "ASIC Resistance is Nothing but a Blockchain Buzzword" that is much in line with Decred's stance on ASICs.
The ongoing debates about the possible Sia fork yet again demonstrate the importance of a robust dispute resolution mechanism. Also, we are lucky to have the treasury.
Mark B Lundeberg, who found a vulnerability in atomicswap earlier, published a concept of more private peer-to-peer atomic swaps. (missed in July issue)
Medium took a cautious stance on cryptocurrencies and triggered at least one project to migrate to Ghost (that same project previously migrated away from Slack).
Regulation: Vietnam bans mining equipment imports, China halts crypto events and tightens control of crypto chat groups.
Reddit was hacked by intercepting 2FA codes sent via SMS. The announcement explains the impact. Yet another data breach suggests to think twice before sharing any data with any company and shift to more secure authentication systems.
Intel and x86 dumpsterfire keeps burning brighter. Seek more secure hardware and operating systems for your coins.
Finally, unrelated to Decred but good for a laugh: yetanotherico.com.

About This Issue

This is the 5th issue of Decred Journal. It is mirrored on GitHub, Medium and Reddit. Past issues are available here.
Most information from third parties is relayed directly from source after a minimal sanity check. The authors of Decred Journal have no ability to verify all claims. Please beware of scams and do your own research.
Feedback is appreciated: please comment on Reddit, GitHub or #writers_room on Matrix or Slack.
Contributions are welcome too. Some areas are collecting content, pre-release review or translations to other languages. Check out @Richard-Red's guide how to contribute to Decred using GitHub without writing code.
Credits (Slack names, alphabetical order): bee, Haon, jazzah, Richard-Red and thedecreddigest.
submitted by jet_user to decred [link] [comments]

Decred Journal — May 2018

Note: New Reddit look may not highlight links. See old look here. A copy is hosted on GitHub for better reading experience. Check it out, contains photo of the month! Also on Medium

Development

dcrd: Significant optimization in signature hash calculation, bloom filters support was removed, 2x faster startup thanks to in-memory full block index, multipeer work advancing, stronger protection against majority hashpower attacks. Additionally, code refactoring and cleanup, code and test infrastructure improvements.
In dcrd and dcrwallet developers have been experimenting with new modular dependency and versioning schemes using vgo. @orthomind is seeking feedback for his work on reproducible builds.
Decrediton: 1.2.1 bugfix release, work on SPV has started, chart additions are in progress. Further simplification of the staking process is in the pipeline (slack).
Politeia: new command line tool to interact with Politeia API, general development is ongoing. Help with testing will soon be welcome: this issue sets out a test plan, join #politeia to follow progress and participate in testing.
dcrdata: work ongoing on improved design, adding more charts and improving Insight API support.
Android: design work advancing.
Decred's own DNS seeder (dcrseeder) was released. It is written in Go and it properly supports service bit filtering, which will allow SPV nodes to find full nodes that support compact filters.
Ticket splitting service by @matheusd entered beta and demonstrated an 11-way split on mainnet. Help with testing is much appreciated, please join #ticket_splitting to participate in splits, but check this doc to learn about the risks. Reddit discussion here.
Trezor support is expected to land in their next firmware update.
Decred is now supported by Riemann, a toolbox from James Prestwich to construct transactions for many UTXO-based chains from human-readable strings.
Atomic swap with Ethereum on testnet was demonstrated at Blockspot Conference LATAM.
Two new faces were added to contributors page.
Dev activity stats for May: 238 active PRs, 195 master commits, 32,831 added and 22,280 deleted lines spread across 8 repositories. Contributions came from 4-10 developers per repository. (chart)

Network

Hashrate: rapid growth from ~4,000 TH/s at the beginning of the month to ~15,000 at the end with new all time high of 17,949. Interesting dynamic in hashrate distribution across mining pools: coinmine.pl share went down from 55% to 25% while F2Pool up from 2% to 44%. [Note: as of June 6, the hashrate continues to rise and has already passed 22,000 TH/s]
Staking: 30-day average ticket price is 91.3 DCR (+0.8), stake participation is 46.9% (+0.8%) with 3.68 million DCR locked (+0.15). Min price was 85.56. On May 11 ticket price surged to 96.99, staying elevated for longer than usual after such a pump. Locked DCR peaked at 47.17%. jet_user on reddit suggested that the DCR for these tickets likely came from a miner with significant hashrate.
Nodes: there are 226 public listening and 405 normal nodes per dcred.eu. Version distribution: 45% on v1.2.0 (up from 24% last month), 39% on v1.1.2, 15% on v1.1.0 and 1% running outdated versions.

ASICs

Obelisk team posted an update. Current hashrate estimate of DCR1 is 1200 GH/s at 500 W and may still change. The chips came back at 40% the speed of the simulated results, it is still unknown why. Batch 1 units may get delayed 1-2 weeks past June 30. See discussions on decred and on siacoin.
@SiaBillionaire estimated that 7940 DCR1 units were sold in Batches 1-5, while Lynmar13 shared his projections of DCR1 profitability (reddit).
A new Chinese miner for pre-order was noticed by our Telegram group. Woodpecker WB2 specs 1.5 TH/s at 1200 W, costs 15,000 CNY (~2,340 USD) and the initial 150 units are expected to ship on Aug 15. (pow8.com, translated)
Another new miner is iBelink DSM6T: 6 TH/s at 2100 W costing $6,300 (ibelink.co). Shipping starts from June 5. Some concerns and links were posted in these two threads.

Integrations

A new mining pool is available now: altpool.net. It uses PPLNS model and takes 1% fee.
Another infrastructure addition is tokensmart.io, a newly audited stake pool with 0.8% fee. There are a total of 14 stake pools now.
Exchange integrations:
OpenBazaar released an update that allows one to trade cryptocurrencies, including DCR.
@i2Rav from i2trading is now offering two sided OTC market liquidity on DCUSD in #trading channel.
Paytomat, payments solution for point of sale and e-commerce, integrated Decred. (missed in April issue)
CoinPayments, a payment processor supporting Decred, developed an integration with @Shopify that allows connected merchants to accept cryptocurrencies in exchange for goods.

Adoption

New merchants:
An update from VotoLegal:
michae2xl: Voto Legal: CEO Thiago Rondon of Appcívico has already been contacted by 800 politicians, and negotiations have started with four pre-candidates for the presidency (slack, source tweet)
Blockfolio rolled out Signal Beta with Decred in the list. Users who own or watch a coin will automatically receive updates pushed by project teams. Nice to see this Journal made it to the screenshot!
Placeholder Ventures announced that Decred is their first public investment. Their Investment Thesis is a clear and well researched overview of Decred. Among other great points it noted the less obvious benefit of not doing an ICO:
By choosing not to pre-sell coins to speculators, the financial rewards from Decred’s growth most favor those who work for the network.
Alex Evans, a cryptoeconomics researcher who recently joined Placeholder, posted his 13-page Decred Network Analysis.

Marketing

@Dustorf published March–April survey results (pdf). It analyzes 166 responses and has lots of interesting data. Just an example:
"I own DECRED because I saw a YouTube video with DECRED Jesus and after seeing it I was sold."
May targeted advertising report released. Reach @timhebel for full version.
PiedPiperCoin hired our advisors.
More creative promos by @jackliv3r: Contributing, Stake Now, The Splitting, Forbidden Exchange, Atomic Swaps.
Reminder: Stakey has his own Twitter account where he tweets about his antics and pours scorn on the holders of expired tickets.
"Autonomy" coin sculpture is available at sigmasixdesign.com.

Events

BitConf in Sao Paulo, Brazil. Jake Yocom-Piatt presented "Decentralized Central Banking". Note the mini stakey on one of the photos. (article, translated; photos: 1 2 album)
Wicked Crypto Meetup in Warsaw, Poland. (video, photos: 1 2)
Decred Polska Meetup in Katowice, Poland. First known Decred Cake. (photos: 1 2)
Austin Hispanic Hackers Meetup in Austin, USA.
Consensus 2018 in New York, USA. See videos in the Media section. Select photos: booth, escort, crew, moon boots, giant stakey. Many other photos and mentions were posted on Twitter. One tweet summarized Decred pretty well:
One project that stands out at #Consensus2018 is @decredproject. Not annoying. Real tech. Humble team. #BUIDL is strong with them. (@PallerJohn)
Token Summit in New York, USA. @cburniske and @jmonegro from Placeholder talked "Governance and Cryptoeconomics" and spoke highly of Decred. (twitter coverage: 1 2, video, video (from 32 min))
Campus Party in Bahia, Brazil. João Ferreira aka @girino and Gabriel @Rhama were introducing Decred, talking about governance and teaching to perform atomic swaps. (photos)
Decred was introduced to the delegates from Shanghai's Caohejing Hi-Tech Park, organized by @ybfventures.
Second Decred meetup in Hangzhou, China. (photos)
Madison Blockchain in Madison, USA. "Lots of in-depth questions. The Q&A lasted longer than the presentation!". (photo)
Blockspot Conference Latam in Sao Paulo, Brazil. (photos: 1, 2)
Upcoming events:
There is a community initiative by @vj to organize information related to events in a repository. Jump in #event_planning channel to contribute.

Media

Decred scored B (top 3) in Weiss Ratings and A- (top 8) in Darpal Rating.
A Chinese institute is developing another rating system for blockchains. The first round included Decred (translated). Upon release Decred ranked 26; for context, Bitcoin ranked 13.
Articles:
Audios:
Videos:

Community Discussions

Community stats: Twitter 39,118 (+742), Reddit 8,167 (+277), Slack 5,658 (+160). Difference is between May 5 and May 31.
Reddit highlights: transparent up/down voting on Politeia, combining LN and atomic swaps, minimum viable superorganism, the controversial debate on Decred contractor model (people wondered about true motives behind the thread), tx size and fees discussion, hard moderation case, impact of ASICs on price, another "Why Decred?" thread with another excellent pitch by solar, fee analysis showing how ticket price algorithm change was controversial with ~100x cut in miner profits, impact of ticket splitting on ticket price, recommendations on promoting Decred, security against double spends and custom voting policies.
@R3VoLuT1OneR posted a preview of a proposal from his company for Decred to offer scholarships for students.
dcrtrader gained a couple of new moderators, weekly automatic threads were reconfigured to monthly and empty threads were removed. Currently most trading talk happens on #trading and some leaks to decred. A separate trading sub offers some advantages: unlimited trading talk, broad range of allowed topics, free speech and transparent moderation, in addition to standard reddit threaded discussion, permanent history and search.
Forum: potential social attacks on Decred.
Slack: the #governance channel created last month has seen many intelligent conversations on topics including: finite attention of decision makers, why stakeholders can make good decisions (opposed to a common narrative than only developers are capable of making good decisions), proposal funding and contractor pre-qualification, Cardano and Dash treasuries, quadratic voting, equality of outcome vs equality of opportunity, and much more.
One particularly important issue being discussed is the growing number of posts arguing that on-chain governance and coin voting is bad. Just a few examples from Twitter: Decred is solving an imagined problem (decent response by @jm_buirski), we convince ourselves that we need governance and ticket price algo vote was not controversial, on-chain governance hurts node operators and it is too early for it, it robs node operators of their role, crypto risks being captured by the wealthy, it is a huge threat to the whole public blockchain space, coin holders should not own the blockchain.
Some responses were posted here and here on Twitter, as well as this article by Noah Pierau.

Markets

The month of May has seen Decred earn some much deserved attention in the markets. DCR started the month around 0.009 BTC and finished around 0.0125 with interim high of 0.0165 on Bittrex. In USD terms it started around $81 and finished around $92, temporarily rising to $118. During a period in which most altcoins suffered, Decred has performed well; rising from rank #45 to #30 on Coinmarketcap.
The addition of a much awaited KRW pair on Upbit saw the price briefly double on some exchanges. This pair opens up direct DCR to fiat trading in one of the largest cryptocurrency markets in the world.
An update from @i2Rav:
We have begun trading DCR in large volume daily. The interest around DCR has really started to grow in terms of OTC quote requests. More and more customers are asking about trading it.
Like in previous month, Decred scores high by "% down from ATH" indicator being #2 on onchainfx as of June 6.

Relevant External

David Vorick (@taek) published lots of insights into the world of ASIC manufacturing (reddit). Bitmain replied.
Bitmain released an ASIC for Equihash (archived), an algorithm thought to be somewhat ASIC-resistant 2 years ago.
Three pure PoW coins were attacked this month, one attempting to be ASIC resistant. This shows the importance of Decred's PoS layer that exerts control over miners and allows Decred to welcome ASIC miners for more PoW security without sacrificing sovereignty to them.
Upbit was raided over suspected fraud and put under investigation. Later news reported that no illicit activity was found, suggesting the raid was premature and damaged trust in local exchanges.
Circle, the new owner of Poloniex, announced a USD-backed stablecoin and Bitmain partnership. The plan is to make USDC available as a primary market on Poloniex. More details in the FAQ.
Poloniex announced lower trading fees.
Bittrex plans to offer USD trading pairs.
@sumiflow made good progress on correcting Decred market cap on several sites:
speaking of market cap, I got it corrected on coingecko, cryptocompare, and worldcoinindex. onchainfx, livecoinwatch, and cryptoindex.co said they would update it about a month ago but haven't yet. I messaged coinlib.io today but haven't got a response yet. coinmarketcap refused to correct it until they can verify certain funds have moved from dev wallets, which is most likely forever unknowable (slack)

About This Issue

Some source links point to Slack messages. Although Slack hides history older than ~5 days, you can read individual messages if you paste the message link into chat with yourself. Digging the full conversation is hard but possible. The history of all channels bridged to Matrix is saved in Matrix. Therefore it is possible to dig history in Matrix if you know the timestamp of the first message. Slack links encode the timestamp: https://decred.slack.com/archives/C5H9Z63AA/p1525528370000062 => 1525528370 => 2018-05-05 13:52:50.
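For example, a quick Python sketch of that decoding:

```python
from datetime import datetime, timezone

link = "https://decred.slack.com/archives/C5H9Z63AA/p1525528370000062"
ts = int(link.rsplit("/p", 1)[1][:10])   # first 10 digits are unix seconds
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2018-05-05 13:52:50+00:00
```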
Most information from third parties is relayed directly from source after a minimal sanity check. The authors of Decred Journal have no ability to verify all claims. Please beware of scams and do your own research.
Your feedback is precious. You can post on GitHub, comment on Reddit or message us in #writers_room channel.
Credits (Slack names, alphabetical order): bee, Richard-Red, snr01 and solar.
submitted by jet_user to decred [link] [comments]

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so if they keep the
set in RAM and you don't, the extra latency directly impacts your profit
margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

By committing to a merkle tree of the state of all transaction outputs, both
spent and unspent, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well as the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
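
As a rough illustration of the append path, here is a minimal Python sketch
that tracks only the log2(n) mountain tips. It is a toy model of the structure
described above, not reference code:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class MountainTips:
    """Append-only MMR keeping only the 'mountain tips' needed to append."""
    def __init__(self):
        self.tips = []   # list of (height, digest), strictly decreasing height

    def append(self, leaf: bytes):
        node = (0, H(leaf))
        # Merge equal-height mountains, like binary carry propagation.
        while self.tips and self.tips[-1][0] == node[0]:
            height, left = self.tips.pop()
            node = (height + 1, H(left, node[1]))
        self.tips.append(node)

mmr = MountainTips()
for txo in [b"a", b"b", b"c", b"d", b"e"]:
    mmr.append(txo)
print([h for h, _ in mmr.tips])   # [2, 0]: a 4-leaf mountain plus leaf e
```

The final state matches the five-output state #2 example further below: one
four-leaf mountain plus the lone leaf e.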

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed, and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.


2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.


3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.


4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more detail later.


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
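
A sketch of this fast path, with illustrative names only (the verifies_unspent
method stands in for checking a merkle path against the commitment):

```python
def apply_spend(outpoint, txo_proof, utxo_set, stxo_set, txo_journal,
                last_commitment):
    if outpoint in utxo_set:
        # Case 1: recently created output; no TXO commitment proof needed.
        del utxo_set[outpoint]
        txo_journal.append(outpoint)
        return True
    # Case 2: archived output; must prove it unspent as of the most
    # recent TXO commitment and must not already be in the STXO set.
    if outpoint in stxo_set:
        return False                         # double spend
    if txo_proof is None or not txo_proof.verifies_unspent(outpoint,
                                                           last_commitment):
        return False
    stxo_set.add(outpoint)
    txo_journal.append((outpoint, txo_proof))
    return True
```

Either branch touches at most two key:value entries plus one journal append,
matching the bound above.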


### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provides other possible tradeoffs that can mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

0
/ \
a b

If we add another entry we get state #1:

1
/ \
0 \
/ \ \
a b c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

2
/ \
2 \
/ \ \
/ \ \
/ \ \
0 2 \
/ \ / \ \
a b c d e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

2
/ \
2 \
\
\
\
\
\
e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding three more txouts results in state #3:

3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \
/ \
/ \
3 3
/ \ / \
e f g h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ \
/ \
/ \
4 3
/ \ / \
e (f) g h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO state, in our case state #2. If txout b is
spent, the transaction must provide the following data from state #2:

2
/
2
/
/
/
0
\
b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ / \
/ / \
/ / \
0 4 3
\ / \ / \
b e (f) g h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ / \
/ / \
/ / \
5 4 3
\ / \ / \
(b) e (f) g h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ /
/ /
/ /
5 4
\ / \
(b) e (f)

Finally, let's put this all together by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \ \
/ \ \
/ \ \
0 2 3
/ / /
a c g

After unpruning we have the following data for state #5:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ \ / \
/ \ / \
/ \ / \
5 2 4 3
/ \ / / \ /
a (b) c e (f) g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

6
/ \
/ \
/ \
/ \
/ \
6 \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
6 6 \
/ \ / \ \
/ \ / \ 6
/ \ / \ / \
6 6 4 6 6 \
/ \ / / \ / / \ \
(a) (b) (c) e (f) (g) i j k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to txouts
spent after that state, including inner nodes where all txouts under them have been
spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.


### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves
under them are fully spent; detecting this is easy, as the TXO MMR is a
merkle-sum tree, with each inner node committing to the sum of the unspent
txouts under it.
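
A toy sketch of such a merkle-sum node, assuming 8-byte big-endian sum
encoding and made-up values:

```python
# Each inner node commits to (hash, sum of unspent value beneath it), so
# "all leaves under it are spent" is detectable as sum == 0.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def sum_node(left, right):
    (lh, lsum), (rh, rsum) = left, right
    digest = h(lh, lsum.to_bytes(8, "big"), rh, rsum.to_bytes(8, "big"))
    return (digest, lsum + rsum)

leaf_e = (h(b"e"), 50_000)     # unspent: commits to its value (made up)
leaf_f = (h(b"f"), 0)          # spent: contributes zero
parent = sum_node(leaf_e, leaf_f)
can_prune = parent[1] == 0     # False here -- e is still unspent
```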

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXOs, we come out ahead, even in the unrealistic case where all
available storage is equally fast. In the real world that isn't the case -
even SSDs are significantly slower than RAM.
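
A quick back-of-envelope check of that break-even point, under an assumed
UTXO-set size:

```python
import math

n = 2 ** 26                     # assumed total txouts (~67 million)
path = math.log2(n)             # ~26 hashes of merkle-path overhead per entry
breakeven = 1 / path
# If fewer than ~3.8% of all txouts are active (non-archived), the path
# overhead is outweighed by the archived entries no longer kept in fast
# storage -- even before accounting for the RAM vs SSD speed gap.
print(f"break-even active fraction: {breakeven:.1%}")
```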


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
data can be easily sharded to scale horizontally; since the data is
self-validating, this scaling requires no trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transactions fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with per-block TXO MMRs, committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year's worth of blocks.
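
Checking that claim with rough numbers, assuming 32-byte digests per inner
node:

```python
import math

blocks = 365 * 144                     # ~52,560 blocks in a year
nodes = blocks * math.log2(blocks)     # log2(n)*n inner nodes, ~825k
print(f"{nodes:,.0f} nodes ~ {nodes * 32 / 1e6:.0f} MB")   # ~26 MB
```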

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks were archival spends, our hypothetical year-long TXO commitment
delay would require only a few hundred MB of data with low-IO-performance
requirements.


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions
without bothering to validate prior history. At the extreme, if there were no
commitment delay at all, then at the cost of a bit of extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.


## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitment proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org
submitted by Godballz to r/CryptoTechnology

Ultimate Bitcoin Stress Test - Monday June 22nd - 13:00 GMT

Three days ago, CoinWallet.eu initiated a relatively limited stress test on the Bitcoin blockchain to determine whether or not we alone could have a large impact on the ecosystem as a whole. While our initial tests merely created full blocks for a multi-hour period, transaction confirmation times remained within 6 blocks for most transactions. This test was both limited and basic.
Today we undertook a similar testing initiative once again, this time with a modified methodology. The result was roughly 3 hours of full blocks combined with increased confirmation times for many Bitcoin transactions. By selecting random transactions that were not initiated by our team, we were able to determine that many standard fee transactions were taking 2-5 blocks before receiving a single confirmation.
Bitcoin is at a breaking point, yet the core developers are too wound up in petty arguments to create the required modifications for long term sustainability. If nothing is done, Bitcoin will never be anything more than a costly science project. By stress testing the system, we hope to make a clear case for the increased block size by demonstrating the simplicity of a large scale spam attack on the network.
The plan:
  • We have set up 10 Bitcoin servers that will send approximately 2 transactions per second each
  • Each of these transactions will be approximately 3kb in size and will each spend to 10-20 addresses
  • The outputs will then be combined to create large 15-30kb transactions automatically pointing back to the original Bitcoin servers. Example: https://blockchain.info/tx/888c5ccbe3261dac4ac0ba5a64747777871b7b983e2ca1dd17e9fc8afb962519
The target will be to generate 1mb worth of transaction data every 5 minutes. At a cost of 0.0001 per kb (as per standard fees) this stress test will cost approximately 0.1 BTC every five minutes. Another way to look at the cost is 0.1 BTC per full block that we generate. We have allocated 20 BTC for this test, and therefore will be able to single handedly fill 100 blocks, or 32 hours worth of blocks. However, we will stop pushing transactions after 24 hours at 13:00 GMT Tuesday June 23.
Predictions
For the sake of avoiding unnecessary calculations, let's assume that each block is 926kb in size, the average normal Bitcoin transaction volume is 600kb per block, and CoinWallet.eu will be pushing 2mb of transaction data into the network every 10 minutes. Under these conditions, the amount of transaction data being pushed to the network every 10 minutes (or every average block) will be ~2600kb. This will result in a 1674 kb backlog every 10 minutes.
By 14:00 GMT Monday June 22nd, the mempool of standard fee transactions will be 10mb. By 24:00 GMT Monday June 22nd, it will be 130mb. By 13:00 GMT Tuesday June 23rd, it will be 241mb.
At this point the backlog of transactions will be approximately 241 blocks, or 1.67 days. When the average new transactions are factored into the equation, the backlog could drag on for 2-3 days. At this point, questions are raised such as whether or not this will cause a "crash landing." It is impossible to know with certainty, however we are anxiously looking forward to Monday.
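As an aside, the projection is easy to check against the post's own inputs
(926kb blocks, 600kb of normal traffic, 2mb of spam per 10 minutes); at that
rate the 24:00 figure works out to roughly 110mb rather than the stated 130mb:

```python
block_kb, normal_kb, spam_kb = 926, 600, 2000    # per 10-minute interval
backlog_kb = normal_kb + spam_kb - block_kb      # 1674 kb per interval

for hours, label in [(1, "14:00 Mon"), (11, "24:00 Mon"), (24, "13:00 Tue")]:
    print(label, f"~{backlog_kb * 6 * hours / 1000:.0f} MB")
# 14:00 Mon ~10 MB, 24:00 Mon ~110 MB, 13:00 Tue ~241 MB
```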
submitted by CoinWalleteu to r/Bitcoin

Fantom Improvement Proposal 1 : Proof of Stake

Fantom Improvement Proposal 1 : Proof of Stake
Medium Article: https://medium.com/fantomfoundation/fantom-improvement-proposal-1-proof-of-stake-fip-1-17bbbe225e70

FIP-1 Proof of Stake: https://github.com/Fantom-foundation/FIPs/blob/master/FIPS/fip-1.md
Introduction
Fantom is a new distributed ledger that is a secure, fast, decentralised, and permissionless network, allowing anyone to transact or build applications on top of it.
In order to secure the network, Fantom has chosen to employ a “Proof of Stake” token model, borrowing from some of the best ideas out there in the team’s opinion.
There will be two types of nodes on the network: validator and listening nodes.
Validator nodes actively participate in the consensus of the network to validate transactions. A supermajority (⅔) of the total validating power of the network is needed to confirm a transaction to finality. These nodes will require a minimum stake.
Listening nodes connect to other nodes in the network and synchronize the entire ledger. They can submit transactions to the network independent of other nodes. However, they do not participate in consensus, and no staking is required.
The total validating power of the network is the total number of votes an event block can receive in the network, of which a minimum ⅔ is required to achieve finality. Note that an event block that contains no transactions can still be validated by the network, and a block reward will be earned, but no transaction fees.

Deliverable

The deliverable of Fantom’s Proof of Stake model is to:
  • Encourage stakeholders to participate in network validation via attractive and sustainable rewards for staking
  • Achieve predictable transaction and storage costs
  • Create a positive feedback loop to encourage the growth in the network. As demand for the network grows, returns for validators should also grow, which in turn should increase the demand for FTM.

Key Features for Network Users

Users on the network will be able to use their tokens in two ways:
  1. Transaction-based staking: Any participant of the network, including a wallet user, can stake a percentage of tokens to gain a percentage of guaranteed transaction volume on the network
  2. Paying for transactions: Like Ethereum or Bitcoin, users will be able to pay per transaction to be confirmed by the network

Transaction-based Staking

Owning a percentage of staked FTM tokens will guarantee a percentage of nominal network processing capacity at all times. Given a user’s FTM holding, there will be a guaranteed amount of gas a user can spend per block. This is also known as “transaction-based staking”.
It is extremely unlikely that all FTM holders will constantly use all of their allotted capacity. In addition, actual network capacity is likely to exceed nominal capacity. It is therefore likely that significant free capacity will be available in most blocks. That free capacity will be available to users in proportion to their staked weight. This will allow users who do not own many tokens, but are active in the system, to have preferred access to processing capacity.
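As an illustration only (this is not Fantom's published formula; stakes and
limits below are made up), per-block gas guarantees proportional to stake
might look like:

```python
def gas_guarantees(block_gas_limit, stakes):
    """Guaranteed gas per block, proportional to staked FTM."""
    total = sum(stakes.values())
    return {user: block_gas_limit * s / total for user, s in stakes.items()}

stakes = {"alice": 6_350_000, "bob": 3_175_000}   # hypothetical FTM stakes
print(gas_guarantees(10_000_000, stakes))
# alice is guaranteed 2/3 of each block's gas, bob 1/3; unused capacity
# would likewise be offered to active users in proportion to staked weight.
```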
FTM has a maximum supply of 3.175 billion tokens, of which more than 30% are available for block rewards on mainnet launch. The foundation has been actively purchasing FTM on the market over time in order to increase our block rewards, essentially removing it from total circulating supply.

Predictable transaction and storage costs

Transaction costs will be expressed in terms of an internal accounting system called Fantom Gas (“FTG”).
FTG will operate in a similar manner to gas on the Ethereum network at a virtual machine level. There will be a cost associated with each op code executed by the register-based virtual machine (costs will be specified in future technical documents. For now Fantom will follow the Ethereum Virtual Machine (EVM) gas costs as specified in “Appendix G. Fee Schedule” of the Ethereum “Yellow Paper” as Fantom is currently using Golang and Rust Implementations of the EVM). The relationship between FTM and FTG is similar to the relationship between Wei and Gas in Ethereum:
FTM = FTG_price × FTG

Where:
FTM: Fantom Token
FTG: Fantom Gas
FTG_price: The price of FTG in terms of FTM. This will be discussed below.

Fantom aims to achieve predictable transaction and storage costs to give users certainty over the cost of running services on the network. With networks such as Ethereum, the cost per transaction in terms of Wei can vary greatly according to the gas price.
As such, Fantom proposes the creation of a Special Purpose Vehicle (“SPV”) with a built-in exchange for FTG in the network. Users will be able to buy FTG to pay for computation in advance.
FTG represents a fixed amount of processing capacity.
FTG can be bought only with FTM, via an internal exchange.
There will be an internal price “oracle” for the FTG/FTM exchange rate. FTM holders will vote on the exchange rate.
A reserve pool (SPV) will be built (holding x amount of FTM) so that there will be liquidity for the exchange.
The reserve pool will also serve to guarantee a minimum block fee during periods when transactions do not cover validator costs (more details to be added).
Any user can buy FTG using FTM using the prevailing exchange rate. From the user’s perspective: FTM debit, FTG credit. The opposite will be true for the SPV.
Users can also convert FTG back to FTM via the exchange, subject to a 10% fee. From the user’s perspective: FTM credit, FTG debit. The opposite will be true for the SPV.
FTG hoarding will be discouraged in a natural way: as FTG roughly follows some cloud computing/storage index, its value will slowly decline versus fiat over time, given the historical decline in both computing and storage costs. This will be a natural disincentive to hoarding.

How are fees paid?

If a user holds FTG, this will be used for tx fees. User: FTG debit. SPV: FTG credit, FTM debit. Validators: FTM credit.
If a user does not have FTG, FTM will be used directly at prevailing rate. User: FTM debit. SPV: FTM credit. Validators: FTM credit.
However, there are several open issues to confirm. The fee split between the SPV and validators is crucial. Part of the SPV income will be a share of transaction fees, as well as on the spread of users selling back FTG to FTM. However, the SPV will need to guarantee minimum validator income. An additional risk is the price volatility in the FTM/FTG price. Consider the following scenario:
Assume 1 FTM = 3 FTG.
A user buys 30 FTG for 10 FTM.
FTM crashes in terms of FTG, and now 1 FTM = 1 FTG.
The user sells 30 FTG back to the exchange for 27 FTM (30 less 10%)
Result: a net gain of 17 FTM for the user, a net loss of 17 FTM for the SPV
To remove this risk, we set a rule that a user can never make a profit from converting back FTG to FTM. This will be discussed in further detail.
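To illustrate, here is a sketch of the scenario above (rates and the 10% fee
come from the text; the clamp is one possible reading of the proposed
no-profit rule, not a confirmed design):

```python
FEE = 0.10                                  # 10% fee on FTG -> FTM

def buy_ftg(ftm_in, ftg_per_ftm):
    return ftm_in * ftg_per_ftm

def sell_ftg(ftg_in, ftg_per_ftm, ftm_originally_paid):
    ftm_out = (ftg_in / ftg_per_ftm) * (1 - FEE)
    # no-profit rule: never pay out more FTM than was originally paid in
    return min(ftm_out, ftm_originally_paid)

ftg = buy_ftg(10, 3)           # 10 FTM -> 30 FTG at 1 FTM = 3 FTG
print(sell_ftg(ftg, 1, 10))    # rate crashes to 1:1 -> clamped to 10, not 27
```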
The SPV could accumulate FTM over time. Users of the network can participate in on-chain governance (we will discuss this in more detail in future posts) to decide what the FTM will be used for. For example, users may vote to burn FTM, or choose to distribute it to validators as additional staking rewards. We believe that this should drive demand for FTM, as validating becomes more attractive.

Organic growth of network capacity according to demand

Fantom predicts that network capacity will grow in line with transaction volume. As a result, the same percentage of FTM tokens held should, over time, give access to a larger processing capacity.

Incentives for active users

The weight given to users in the system will depend on two factors:
  1. Their staked FTM holdings, and
  2. The measure of activity over the past 6 months, with more recent activity weighted more heavily.
Activity is defined based on the amount of gas spent over time in the transactions submitted by the node to the network. There will be a minimum amount of gas required to be spent per node in order to receive rewards. This should incentivise nodes to use the network. This is known as Proof of Importance, an idea that has been expressed in other platforms such as NEM.
We will develop a suitable formula which will combine these two factors.
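Purely as an illustration of how such a formula might combine the two factors
(the 180-day window, 30-day half-life, and 70/30 weighting below are
assumptions - the actual formula is still to be developed):

```python
def activity_score(gas_by_day_age):
    """Recency-weighted gas spent over the last 180 days (30-day half-life)."""
    return sum(gas * 0.5 ** (age / 30)
               for age, gas in gas_by_day_age.items() if age <= 180)

def node_weight(staked_ftm, activity, alpha=0.7):
    """Hypothetical weight: alpha * stake + (1 - alpha) * recent activity."""
    return alpha * staked_ftm + (1 - alpha) * activity

# gas spent 1 day ago counts almost fully; gas from 90 days ago counts 1/8
print(node_weight(6_350_000, activity_score({1: 40_000, 90: 40_000})))
```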

Key Features for Developers

Dapp-based staking

One of the issues with the current Ethereum Proof of Work model is that developers are not incentivised to create and run Dapps on the network. Instead, Ethereum has largely been used to conduct Initial Coin Offerings to fund projects in competition with the network.
In order to incentivise the growth and development of Dapps on Fantom, we propose a concept called Dapp-based staking. A developer who deploys a Dapp can stake a certain number of FTM to the network, and users will be able to use the application for free according to the rules set by the developer.
This will be a feature of smart contracts on the Fantom network, where a developer can pay FTM into the contract account to allow free use of the app as long as there is FTM left. The contract account has an FTG balance; fees are taken first from this balance, and only secondarily from the user. This way, as long as funds remain, the dapp is free to use. If the owner leaves, users can still fund the contract itself.

Network Validators

Number of validators

In order to ensure a fast network, and also to limit costs, the system will favor the emergence of a reasonable number (50) of high-performance nodes as validators to begin with. The number of validators may grow over time depending on how many users decide to stake (given Fantom will be a permission-less network).
Node performance is defined as:
  1. Processing capacity per second, which can be measured for example by the maximum number of simple transactions per second, and
  2. Networking Throughput
Note: 50 nodes is a starting point so that the network can scale safely and properly, and so that users can monitor the network and make sure it is secure as it grows.

Token staking

In the single-shard model, a validator must stake at least 0.2% of the total FTM supply (6,350,000 FTM) of their own tokens. This number, as well as many other network settings, will be subject to change via on-chain voting. They will also change when Sharding is introduced.

Block Rewards for Validators (Fees)

In return for staking FTM, attesting to correct blocks, signing off on the validity of a block, and proposing blocks, validators will be rewarded in at least two ways:
  • A fixed amount to compensate for the cost of running a node, to ensure that validators do not run nodes at a loss
  • A portion of network transaction fees. This is determined by the total number of transaction fees generated by all transactions in event blocks.
Because transaction prices will essentially be fixed, the key way for validators to increase income is by increasing processing capacity.
Here is a way that would allow users themselves to signal the need for higher transaction capacity. Suppose a user has the choice between staking tokens for transacting and staking tokens for validation (ignoring Dapp-based staking).
When there is plenty of processing capacity, we would expect more validation staking. But as demand for transactions grows, we could see a shift towards transaction staking. Once transaction staking volume exceeds a certain level, this would trigger an increase of the baseline processing limit (equivalent to a block gas limit), which would become effective only when a large majority of nodes prove they have the necessary capacity. The advantage is that this would provide a clear and visible signal to the entire network to increase processing capacity. This might happen even before actual volume increases, as users increase their transaction staking in anticipation of increased future transaction volume (increased buying of FTG could also provide such a signal). Note that this will require a way to occasionally measure validator processing power.

Network Security

Penalties: validators will have a significant amount of FTM tokens staked, which will be at risk if malicious behaviour is identified. Penalties are necessary in a Proof of Stake system in order to disincentivize bad behaviour and avoid the nothing-at-stake problem, where, absent any penalties, a validator is incentivised to bet on every possible fork of the network, as there is no cost to doing so.
In the Fantom network, we propose three types of penalties:
  • Demurrage Fee: A wallet will need to submit a minimum number of transactions over a given period of time or pay a certain fee. This should encourage network activity. The fee is paid proportionally to validators on the network.
  • Validating rejected event blocks: Nodes will need to stake for each event block they want to earn fees from. If an event block is ultimately not confirmed by ⅔ of the total voting power of the network, then the stake is lost and distributed proportionally to other nodes.
  • Network pruning: Potentially malicious nodes will be quickly eliminated. Slow nodes will be identified and their rating down-voted, making them less likely to be selected for validation.

Future Developments

The team is focused on analyzing key numbers for the concepts listed above, as well as making several improvements.
The numbers the team will be working on include:
  • Formula for calculating staking returns, which is currently a combination of staking amounts and Proof of Importance
  • Percentage of guaranteed transactions in return for a percentage of staked FTM
  • Percentage of guaranteed transactions when staking for a Dapp
  • Block rewards
  • Minimum staking requirements for a validating node
  • Minimum gas / transaction requirements to gain a Proof of Importance Score
  • Quantifying node performance
  • Maximum number of validating nodes before performance degradation occurs
  • Penalties:
    • Demurrage Fee
    • Network Pruning

Other Ideas Fantom is Currently Exploring

In addition to the concepts listed above, Fantom is also exploring the following:
  • Multiple Fantom mainnet chains employing different variations of the “Proof of Stake” model with different parameters, connected through Digital Rights Control Management (DRCM). Given that there is a level of unpredictability in how people will actually behave on the network, users might be able to choose from many chains, and the best will be chosen over time.
  • Transaction mining solutions: Transactions themselves are rewarded. Consequently, nodes are incentivised to a) create transactions and b) validate transactions.
  • Paying for storage: Over time, the network is likely to contain data that becomes unused. For example, Dapps can be abandoned for a variety of reasons. On centralised servers such as AWS, a developer must continue paying for storage costs per byte. However, on networks such as Bitcoin or Ethereum, storage is paid once and stored in perpetuity for free. This leads to the following problems:
    • The network becomes increasingly bloated with unused data.
    • Users have to maintain this unused data to maintain state.
  • We are actively working on incorporating a “rent-based” model into the Fantom network. Further details about this problem can be seen here.
If you have ideas you would like to contribute to the discussion, please contact us at [[email protected]](mailto:[email protected]) or comment on the FIP here: https://github.com/Fantom-foundation/FIPs/blob/master/FIPS/fip-1.md
To find out more about the Fantom project and its technology, visit us here or join us on our social media channels.
Official E-mail Address: [[email protected]](mailto:[email protected])
Official Website: https://www.fantom.foundation
Official Telegram English Chat: https://t.me/fantom_english
Official Telegram Chinese Chat: https://t.me/fantom_chinese
Official Fantom Reddit: https://www.reddit.com/r/FantomFoundation/
Official Fantom Twitter: https://twitter.com/FantomFDN
Official Github Page: https://github.com/Fantom-foundation
Official Youtube Channel: https://www.youtube.com/c/fantomfoundation
submitted by bmoonn to r/FantomFoundation

Understanding Bitcoin - Validity is in the Eye of the Beholder

Preamble
There are currently a lot of misconceptions and misinformation in the bitcoin community about the 'validity' of a blockchain or block, specifically in the case of hard forks. 'Validity' is mentioned a number of times in the bitcoin whitepaper, in different but similar contexts. Currently there seems to be a growing (incorrect) understanding that the validity of the chain is determined by whether the chain follows the original consensus protocol. If this were true, then any upgrade to bitcoin that falls outside the original consensus protocol initiated in 2009 would be considered 'invalid' and therefore not bitcoin, i.e. any and all hard forks would by definition be considered invalid. This would then mean there is an argument that the bitcoin that currently exists is in fact not bitcoin at all, as it is not possible to deterministically sync the full blockchain using a pre-0.8.1 bitcoin client. This is obviously not true, as everyone considers the current network to be bitcoin; therefore there must be a more nuanced definition of validity. Another growing belief, pushed by certain people, is that bitcoin consensus follows 'community consensus'. This is also not true, at least in the way it has been presented, and will be debunked in this article. Luckily there is a more nuanced and complete definition of 'validity' described in the white paper, but it is not well understood by the community.
(If you want to skip straight to the meat of this article, scroll down to the section titled 'Validity of Network Forks'.)
In the whitepaper 'valid', 'invalid' and 'validity' are mentioned 6 times. A number of these times it is mentioned outside the context of network forks but I made a quick summary of them anyway below so as to clear up any confusion. The first time is in the section titled 'Network'. It states;
The steps to run the network are as follows:
1. New transactions are broadcast to all nodes.
2. Each node collects new transactions into a block.
3. Each node works on finding a difficult proof-of-work for its block.
4. When a node finds a proof-of-work, it broadcasts the block to all nodes.
5. Nodes accept the block only if all transactions in it are valid and not already spent.
6. Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.
The word 'invalid' is used in the same context in the 'Calculations' section of the white paper. Specifically, it states;
We consider the scenario of an attacker trying to generate an alternate chain faster than the honest chain. Even if this is accomplished, it does not throw the system open to arbitrary changes, such as creating value out of thin air or taking money that never belonged to the attacker. Nodes are not going to accept an invalid transaction as payment, and honest nodes will never accept a block containing them. An attacker can only try to change one of his own transactions to take back money he recently spent.
In this context 'valid' and 'invalid' refer to the validity of individual transactions that have been published into a block. For example, a transaction's outputs cannot total more than its combined inputs. This meaning of 'valid' is well understood, so I will not discuss it further.
One more place in the bitcoin whitepaper that contains a reference to validity is in the 'Incentives' section (this is one of the most important yet most misunderstood and under-appreciated sections in the whitepaper). In it, it states:
The incentive may help encourage nodes to stay honest. If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.
In this context Satoshi is talking about the fact that by undermining bitcoin and the blockchain by trying to steal from it, he is devaluing his own spoils. i.e. there is a disincentive to try and attack the network. This is unfortunately not a well understood concept but is not relevant to the word 'validity' in terms of hard forks.
Another section of the white paper that mentions 'validity' is in the 'Simplified Payment Verification' section. In this section it states;
While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker's fabricated transactions for as long as the attacker can continue to overpower the network. One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency.
In this context, an 'invalid' block is simply a block that contains invalid transactions (as discussed earlier in this article) that are being used by an attacker to steal funds. This only works on an SPV (simplified payment verification) node because they do not have the full blockchain to check against. As this is not relevant to the 'validity' of network forks it is outside the scope of this article.
   
   
Validity of Network Forks
The other places that the words 'valid' or 'invalid' are used in the white paper are all in the context that is relevant to network hard forks. In the 'Conclusion' section of the white paper it states;
They vote with their CPU power, expressing their acceptance of valid blocks by working on extending them and rejecting invalid blocks by refusing to work on them. Any needed rules and incentives can be enforced with this consensus mechanism.
The absolutely key word in these sentences is 'vote'. The mining nodes in the network vote in every block by accepting blocks they consider 'valid' and rejecting blocks they consider 'invalid'. Well this then begs the question; if the miners are voting on what is invalid and what is valid, what determines the validity? This is the crux of the issue. The currently held misunderstanding, often purveyed by a number of Core developers, is that validity is determined by whether the blocks fit within the current consensus protocol. This is nonsensical. With each block being a vote, this would be the equivalent of when Fifa had a vote for their president in 2011 and the only name on the ballot paper was Sepp Blatter. In fact it is worse than that, as in the case of bitcoin it would be the same name (rules) on the ballot paper for every single block forever. Just like Fifa, this would make bitcoin into a kind of banana republic where the vote is totally meaningless, and in reality, there is no vote at all. This is in total contradiction to the white paper. In fact what it states is "Any needed rules and incentives can be enforced with this consensus mechanism." This means that the rules can be changed and voted upon if the majority of the hash power agrees.
There is another section of the white paper specifically dedicated to this concept and goes into further detail. Although it doesn't directly reference the words 'invalid' or 'valid' it does directly reference the voting of hashing power. This is in the 'Proof-of-Work' section and it states;
The proof-of-work also solves the problem of determining representation in majority decision making. If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it. If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes.
The first sentence "The proof-of-work also solves the problem of determining representation in majority decision making." directly links to the statement in the white paper conclusion that "Any needed rules and incentives can be enforced with this consensus mechanism.". What is being stated is that the rules of the system, and any changes to them, can be determined and enforced using the proof-of-work system. It is stating that the majority of the hash power decides their representation in decision making.
There is seemingly currently a sub-section of the bitcoin community and developers pushing the narrative that the 'users' control the network and determine what is 'valid'. This is wrong and is also explained in this section of the white paper. It states; "If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote.". What this is saying is that with proof-of-work you cannot over-inflate your representation beyond what you actually represent. There is no computational or systematic way for the network to determine a vote based on what 'users' 'want'. We can't use a twitter poll. We can't get a feeling from comments in a forum. Not only can we not know the intentions of any anonymous or pseudonymous social media accounts, we also cannot even know the intentions of known and identified entities. Even if you could get every supposed network participant in one room and have a direct vote, there would be no way to determine the difference between who is a real network participant who wants the network to thrive and who is against the network. There would also be no way to even apply the result of that vote to the network.
This is a very very key point in proof-of-work and why it was used in bitcoin.
The intentions of arbitrary entities CANNOT be determined. Miners are the only set of entities whose intentions we can determine. This is because bitcoin itself has given the miners their intentions and that is to make as much money as possible using bitcoin. This means that the miners' intentions are aligned with the network participants because the miners want to maximise the value of bitcoin to maximise their profit. If we cannot trust the majority of miners then the entire security model of bitcoin fails.
I will discuss this further in another article as it is a fundamental yet misunderstood aspect of bitcoin that needs further illumination.
   
   
Conclusion
Getting back to the crux of this article, how does this relate to 'validity'? Well as we now know, the miners are the ones who get to make the decision on whether a block is valid or not. This means that each individual miner is able to decide what block is valid or not valid. This does not mean that different miners will always choose different arbitrary rules. In fact the opposite is true. In almost all situations the miners will choose to follow the exact same set of rules as all the other miners. This is because if one miner makes a decision to follow a new set of rules by considering blocks valid that other miners consider invalid, then their blocks will not be accepted by the rest of the network. In almost all situations this would be considered adverse, as the network is stronger working together as a whole, with as much hashing power and network effect as possible. In some situations, like the recent Bitcoin Cash fork, it may be considered beneficial to enough entities and miners that a fork is made to occur. This sub-set of miners now view blocks following a new set of rules as valid, which other miners on the original chain consider invalid. This is when a hard fork occurs. Without going into too much detail on the economics of hard forks (an article about this will come later), it is important to realise that if the newly separated network has value and blocks are being mined on it then it will continue to exist. In this situation, what is a valid block? Well a block that is valid on one network will be considered invalid on the other and vice versa. Validity is in the eye of the beholder. From the perspective of a specific miner their own blocks are always valid, but they may consider blocks from other miners invalid and vice versa. The concept newly introduced by certain bitcoin Core developers that 'validity' is whether you follow the current chain or not is at best incomplete, and could be more accurately described as incorrect. Validity is determined by the miners as individual entities and not by the collective community. If the validity of blocks were determined by the collective community then there would be no need for miners at all and the network could be undermined trivially using a social attack.
This new narrative that validity is determined by the collective community has created perverse concepts like 'miners hard forking is an attack on the network', when really all that is happening is that miners are changing what rules determine what is considered valid by using their hash power to vote, which is something that was explained in the white paper before the genesis block had even been mined.
This is part of my 'Understanding Bitcoin' series of articles. If you found this article valid I appreciate any upvotes and tips on my yours.org articles which you can find below.
Understanding Bitcoin - Validity is in the Eye of the Beholder
Understanding Bitcoin - What is 'Centralisation'?
Understanding Bitcoin - Incentives & The Power Dynamic
Bitcoin: A Peer-to-Peer Electronic Cash System
submitted by singularity87 to r/btc

Blocksize Limit Debate: A few points I'd like to see addressed

I think there's a risk of the perfect being the enemy of the good here. Increasing the limit just ensures business as usual for the time being. It's not a choice between this and the Lightning Network, etc. There are more than enough incentives left to do scalability work even if blocks increase in size. Most people even agree that they'll have to do so eventually anyway. I think the whole problem here is theory vs. practice. Beware of analysis paralysis. The stakes of doing a blocksize limit increase are simply not that high, and getting consensus for a hard fork will only get more difficult in the future.
submitted by supermari0 to r/Bitcoin

Great interview questions for bitcoin engineers

From here...
https://bitcointalk.org/index.php?topic=5006583.0
Questions. Chapter 1: Introduction 1. What are the main Bitcoin terms? 2. What is a Bitcoin address? 3. What is a Bitcoin transaction? 4. What is a Bitcoin block? 5. What is a Bitcoin blockchain? 6. What is a Bitcoin transaction ledger? 7. What is a Bitcoin system? What is a bitcoin (cryptocurrency)? How are they different? 8. What is a full Bitcoin stack? 9. What are two types of issues that digital money have to address? 10. What is a “double-spend” problem? 11. What is a distributed computing problem? What is the other name of this problem? 12. What is an election? 13. What is a consensus? 14. What is the name of the main algorithm that brings the bitcoin network to the consensus? 15. What are the different types of bitcoin clients? What is the difference between these clients? Which client offers the most flexibility? Which client offers the least flexibility? Which client is the most and least secure? 16. What is a bitcoin wallet? 17. What is a confirmed transaction and what is an unconfirmed transaction? Chapter 2: How Bitcoin works. 1. What is the best way to understand transactions in the Bitcoin network? 2. What is a transaction? What does it contain? What is the similarity of a transaction to a double entry ledger? What does input correspond to? What does output correspond to? 3. What are the typical transactions in the bitcoin network? Could you please name three of such transactions and give examples of each type of the transaction? 4. What is a QR and how it is used in the Bitcoin network? Are there different types of QRs? If so, what are the different types? Which type is more informational? What kind of information does it provide? 5. What is SPV? What does this procedure check and what type of clients of the Bitcoin network usually use this procedure? Chapter 3: The Bitcoin client. 1. How to download and install the Core Bitcoin client? 2. What is the best way to test the API available for the Core Bitcoin client without actually programming? What is the interface called? 3. What are the major areas of operations in the Bitcoin client? What can we do with the client? 4. What are the available operations for the Bitcoin addresses? 5. What are the available read operations for the Bitcoin transactions? How is a transaction encoded in the Bitcoin network? What is a raw transaction and what is a decoded transaction? 6. If I want to get information about a transaction that is not related to any address in my own wallet, do I need to change anything in the Bitcoin client configuration? If yes, which option do I need to modify? 7. What are the available read operation for the Bitcoin blocks? 8. What are the available operations for the creation of the transactions in the Bitcoin network? 9. How do you normally need to address the unspent output from the previous transaction in order to use it as an input for a new transaction? 10. What is the mandatory operation after creating a new transaction and before sending this new transaction to the network? What state does the wallet have to be in order to perform this operation? 11. Is the transaction ID immutable (TXID)? If not why, if yes, why and when? 12. What does signing a transaction mean? 13. What are the other options for Bitcoin clients? Are there any libraries that are written for some specific languages? What types of clients do these libraries implement? Chapter 4: Keys, Addresses and Wallets. 1. What is a PKC? When it was developed? What are the main mathematical foundations or functions that PKC is using? 2. What is ECC? 
Could you please provide the formula of the EC? What is the p and what is the Fp? What are the defined operations in ECC? What is a “point to infinity”? 3. What is a Bitcoin wallet? Does this wallet contain coins? If not, what does it contain then? 4. What is a BIP? What it is used for? 5. What is an encrypted private key? Why would we want to encrypt private keys? 6. What is a paper wallet? What kind of storage it is an example of? 7. What is a nondeterministic wallet? Is it a good wallet or a bad wallet? Could you justify? 8. What is a deterministic wallet? 9. What is an HD wallet? 10. How many keys are needed for one in and out transaction? What is a key pair? Which keys are in the key pair? 11. How many keys are stored in a wallet? 12. How does a public key gets created in Bitcoin? What is a “generator point”? 13. Could you please show on a picture how ECC multiplication is done? 14. How does a private key gets created in Bitcoin? What we should be aware of when creating a new private key? What is CSPRNG? What kind of input should this function be getting? 15. What is a WIF? What is WIF-Compressed? 16. What is Base58 encoding and what is Base58Check encoding? How it is different from Base64 encoding? Which characters are used in Base58? Why Base58Check was invented? What kind of problems does it solve? How is Base58Check encoding is created from Base58 encoding? 17. How can Bitcoin addresses be encoded? Which different encodings are used? Which key is used for the address creation? How is the address created? How this key is used and what is the used formula? 18. Can we visually distinguish between different keys in Base58Check format? If yes, how are they different from each other? What kind of prefixes are used? Could you please provide information about used prefixes for each type of the key? 19. What is an index in HD wallets? How many siblings can exist for a parent in an HD wallet? 20. What is the depth limitation for an HD wallet key hierarchy? 21. What are the main two advantages of an HD wallet comparing to the nondeterministic wallets? 22. What are the risks of non-hardened keys creation in an HD wallet? Could you please describe each of them? 23. What is a chain code in HD wallets? How many different chain code types there are? 24. What is the mnemonic code words? What are they used for? 25. What is a seed in an HD wallet? Is there any other name for it? 26. What is an extended key? How long is it and which parts does it consist of? 27. What is P2SH address? What function are P2SH addresses normally used for? Is that correct to call P2SH address a multi-sig address? Which BIP suggested using P2SH addresses? 28. What is a WIF-compressed private key? Is there such a thing as a compressed private key? Is there such a thing as a compressed public key? 29. What is a vanity address? 30. What is a vanity pool? 31. What is a P2PKH address? What is the prefix for the P2PKH address? 32. How does the owner prove that he is the real owner of some address? What does he have to represent to the network to prove the ownership? Why a perpetrator cannot copy this information and reuse it in the next transactions? 33. What is the rule for using funds that are secured by a cold storage wallet? How many times you can send to the address that is protected by the private key stored in a cold storage? How many times can you send funds from the address that is protected by the private key stored in a cold storage? Chapter 5: Transactions. 1. What is a transaction in Bitcoin? 
Why is it the most important operation in the Bitcoin ecosystem? 2. What is UTXO? What is one of the important rules of the UTXO? 3. Which language is used to write scripts in Bitcoin ecosystem? What are the features of this language? Which language does it look like? What are the limitations of this language? 4. What is the structure of a transaction? What does transaction consists of? 5. What are the standard transactions in Bitcoin? How many standard transactions there are (as of 2014)? 6. What is a “locking script” and what is an “unlocking script”? What is inside these scripts for a usual operation of P2PKH? What is a signature? Could you please describe in details how locking and unlocking scripts work and draw the necessary diagrams? 7. What is a transaction fee? What does the transaction fee depend on? 8. If you are manually creating transactions, what should you be very careful about? 9. Could you please provide a real life scenario when you might need a P2SH payment and operation? 10. What is the Script operation that is used to store in the blockchain some important data? Is it a good practice? Explain your answer. Chapter 6: The Bitcoin Network. 1. What is the network used in Bitcoin? What is it called? What is the abbreviation? What is the difference between this network architecture and the other network architectures? Could you please describe another network architecture and compare the Bitcoin network and the other network architectures? 2. What is a Bitcoin network? What is an extended Bitcoin network? What is the difference between those two networks? What are the other protocols used in the extended Bitcoin network? Why are these new protocols used? Can you give an example of one such protocol? What is it called? 3. What are the main functions of a bitcoin node? How many of them there are? Could you please name and describe each of them? Which functions are mandatory? 4. What is a full node in the Bitcoin network? What does it do and how does it differ from the other nodes? 5. What is a lightweight node in the Bitcoin network? What is another name of the lightweight node? How lightweight node checks transactions? 6. What are the main problems in the SPV process? What does SPV stand for? How does SPV work and what does it rely on? 7. What is a Sybil attack? 8. What is a transaction pool? Where are transaction pools stored in a Bitcoin network client? What are the two different transaction pools usually available in implementations? 9. What is the main Bitcoin client used in the network? What is the official name of the client and what is an unofficial name of this client? 10. What is UTXO pool? Do all clients keep this pool? Where is it stored? How does it differ from the transaction pools? 11. What is a Bloom filter? Why are Bloom filters used in the Bitcoin network? Were they originally used in the initial SW or were they introduced with a specific BIP? Chapter 7: The Blockchain. 1. What is a blockchain? 2. What is a block hash? Is it really a block hash or is it a hash of something else? 3. What is included in the block? What kind of information? 4. How many parents can one block have? 5. How many children can one block have? Is it a temporary or permanent state of the blockchain? What is the name of this state of the blockchain? 6. What is a Merkle tree? Why does Bitcoin network use Merkle trees? What is the advantage of using Merkle trees? What is the other name of the Merkle tree? What kind of form must this tree have? 7. 
How are blocks identified in the blockchain? What are the two commonly used identities? Are these identities stored in the blockchain? 8. What is the average size of one transaction? How many transactions are normally in one block? What is the size of a block header? 9. What kind of information do SPV nodes download? How much space do they save by that comparing to what they would need if they had to download the whole blockchain? 10. What is a usual representation of a blockchain? 11. What is a genesis block? Do clients download this block and if yes – where from? What is the number of the genesis block? 12. What is a Merkle root? What is a Merkle path? Chapter 8: Mining and Consensus. 1. What is the main purpose of mining? Is it to get the new coins for the miners? Alternatively, it is something else? Is mining the right or good term to describe the process? 2. What is PoW algorithm? 3. What are the two main incentives for miners to participate in the Bitcoin network? What is the current main incentive and will it be changed in the future? 4. Is the money supply in the Bitcoin network diminishing? If so, what is the diminishing rate? What was the original Bitcoin supply rate and how is it changed over time? Is the diminishing rate time related or rather block related? 5. What is the maximum number of Bitcoins available in the network after all the Bitcoins have been mined? When will all the Bitcoins be mined? 6. What is a decentralized consensus? What is a usual setup to clear transactions? What does a clearinghouse do? 7. What is deflationary money? Are they good or bad usually? What is the bad example of deflationary spiral? 8. What is an emergent consensus? What is the feature of emergent consensus? How does it differ from a usual consensus? What are the main processes out of which this emergent decentralized consensus becomes true? 9. Could you please describe the process of Independent Transaction Verification? What is the list of criteria that are checked against a newly received transaction? Where can these rules be checked? Can they be changed over time? If yes, why would they be changed? 10. Does mining node have to be a full node? If not, what are the other options for a node that is not full to be a mining node? 11. What is a candidate block? What types of nodes in the Bitcoin network create candidate blocks? What is a memory pool? Is there any other name of the memory pool? What are the transactions kept in this memory pool? 12. How are transactions added to the candidate block? How does a candidate block become a valid block? 13. What is the minimum value in the Bitcoin network? What is it called and what is the value? Are there any alternative names? 14. What is the age of the UTXO? 15. How is the priority of a transaction is calculated? What is the exact formula? What are the units of each contributing member? When is a transaction considered to be old? Can low priority transactions carry a zero fee? Will they be processed in this case? 16. How much size in each block is reserved for high priority transactions? How are transactions prioritized for the remaining space? 17. Do transactions expire in Bitcoin? Can transactions disappear in the Bitcoin network? If yes, could you please describe such scenario? 18. What is a generation transaction? Does it have another name? If it does, what is the other name of the transaction? What is the position of the generation transaction in the block? Does it have an input? Is the input usual UTXO? If not – what is the input called? 
19. What is the coinbase data? What is it currently used for?
20. What are the little-endian and big-endian formats? Could you give an example of each?
21. How is the block header constructed? Which fields are calculated and added to the block header? Could you describe the steps for calculating the block header fields?
22. What is mantissa-exponent encoding? How is this encoding used in the Bitcoin network? What is the difficulty target? What is the actual process of mining? What kind of mathematical calculation is executed to conduct mining?
23. Which hash function is used in the Bitcoin mining process?
24. Could you describe the PoW algorithm? Which features of the hash function does it depend on? What is another name for such a hash function? What is a nonce? How can we increase the difficulty of the PoW calculation? What do we need to change, and how?
25. What is the difficulty bits notation? Could you describe in detail how it works? What is the formula for the difficulty notation? (Questions 22, 24 and 25 are illustrated in the sketch after this chapter's questions.)
26. Why is difficulty adjustable? Who adjusts it, and how exactly? Where is the adjustment made, and on which node? How many blocks are taken into consideration to predict the next block issuance rate? What is the limit on the change? Does the difficulty target depend on the number of transactions?
27. How is a new block propagated in the network? What kind of verification does each node perform? What is the list of criteria for the new block? What kind of process ensures that miners do not cheat?
28. How does the process of block assembly work? What sets of blocks does each full node maintain? Could you describe these sets of blocks?
29. What is a secondary chain? What does each node do to check this chain and perhaps promote it to the primary chain? Could you describe an example in which a fork occurs and what happens?
30. How quickly are forks resolved most of the time? Within how many new block periods?
31. Why is the next block generated within roughly 10 minutes of the previous one? What is this compromise about? What did the designers of the Bitcoin network think about when setting this rule?
32. What is the hashing race? How has Bitcoin's hashing capacity changed over the years since inception? What kind of hardware was used initially, and how did hardware utilization evolve? What kind of hardware is used for mining now? How has the network difficulty grown?
33. What is the size of the field that stores the nonce in the block header? What is the limitation and problem of the nonce? Why was an extra nonce created? Was there an intermediate solution? If yes, what was it, and what were its limitations?
34. What is the exact solution for the extra nonce? Where does the new space come from? How much space is currently used, and what is the range of the extra nonce now?
35. What is a mining pool? Why were pools created? How are such pools normally operated? Do they pay the pool participants regularly? To which address are newly created bitcoins distributed? How do mining pools make money? How do mining pools measure participation? How are shares earned calculated?
36. What is a managed pool? What is the owner of the pool called? Do pool members need to run full nodes? Explain why or why not.
37. What are the best-known protocols used to coordinate pool activities? What is a block template? How is it used?
38. What is the limitation of a centralized pool? Is there an alternative? If yes, what is it, what is it called, and how does it work?
39. What is a consensus attack? What is the main assumption of the Bitcoin network? What can be the targets of consensus attacks? What can these attacks do, and what can they not do? How much of the overall capacity of the network do you have to control to mount a consensus attack?
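As promised under questions 22, 24 and 25, the sketch below decodes the compact mantissa-exponent “bits” field (target = coefficient × 2^(8 × (exponent − 3))) and runs a toy nonce search against a deliberately easy target. The header prefix is fake and the function names are illustrative; a real header is an 80-byte serialized structure.

```python
import hashlib
import struct

def bits_to_target(bits: int) -> int:
    """Decode the compact mantissa-exponent 'bits' field of a block header:
    target = coefficient * 2^(8 * (exponent - 3))."""
    exponent = bits >> 24             # high byte
    coefficient = bits & 0x007FFFFF   # low three bytes, sign bit masked off
    return coefficient << (8 * (exponent - 3))

def toy_mine(header_prefix: bytes, target: int) -> int:
    """Iterate the 4-byte nonce until the double-SHA-256 of the candidate
    header, read as a little-endian integer, does not exceed the target."""
    for nonce in range(2**32):
        header = header_prefix + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "little") <= target:
            return nonce
    raise RuntimeError("nonce exhausted; a real miner changes the extra nonce")

print(hex(bits_to_target(0x1903A30C)))  # an example compact value
# An easy target (large exponent) so the toy search finishes quickly,
# after roughly 65,000 attempts on average:
print(toy_mine(b"toy header prefix", bits_to_target(0x1F00FFFF)))
```

Raising difficulty means lowering the target, which is exactly the knob question 24 asks about: a smaller target leaves fewer acceptable hashes, so more nonce attempts are needed on average.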
Chapter 9: Alternative Chains, Currencies and Applications.
1. What are alternative coins called? Are they built on top of the Bitcoin network? What are some examples? Is there an alternative approach? Could you describe some alternatives?
2. Are there alternatives to the PoW algorithm? If yes, what are they? Could you name two or three?
3. What operation of the Script language is used to store metadata in the Bitcoin blockchain?
4. What is a coloured coin? Could you explain how it is created and how it works? Do you need any special software to manage coloured coins?
5. What is the difference between altcoins and altchains? What is Litecoin? What are the major differences between Bitcoin and Litecoin? Why have so many altcoins been created? What are they usually based on?
6. What is Scrypt? Where is it used, and how does it differ from the original algorithm from which it was derived?
7. What is a demurrage currency? Could you give an example of a blockchain and cryptocurrency that is demurrage-based?
8. What is a good example of an alternative algorithm to PoW? What is it called, and how does it differ from PoW? Why have alternatives to Bitcoin's PoW been created? What is the main reason? What are dual-purpose PoW algorithms? Why were they created?
9. Is Bitcoin an “anonymous” currency? Is it difficult to trace transactions and understand someone's spending habits?
10. What is Ethereum? What kind of currency does it use? What is the difference from Bitcoin?
Chapter 10: Bitcoin Security.
1. What is the main approach to Bitcoin security?
2. What are two common mistakes made by newcomers to the world of Bitcoin?
3. What is a root of trust in traditional security settings? What is the root of trust in the Bitcoin network? How should you assess the security of your system?
4. What are cold storage and paper wallets?
5. What is a hardware wallet? How is it better than storing private keys on your computer or smartphone?
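And, as promised under Chapter 5, question 6, here is a toy stack machine that walks through P2PKH validation. The hash and signature checks are stand-ins (real validation uses RIPEMD160(SHA256(·)) and an ECDSA signature over the spending transaction), so treat this as a sketch of the control flow, not of the cryptography.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    # Stand-in: real Bitcoin uses RIPEMD160(SHA256(data)); we truncate a
    # double SHA-256 to 20 bytes so the sketch runs everywhere.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()[:20]

def check_sig(sig: bytes, pubkey: bytes) -> bool:
    # Stand-in for OP_CHECKSIG, which verifies an ECDSA signature over the
    # spending transaction. Here it merely checks both items are non-empty.
    return bool(sig) and bool(pubkey)

def run_p2pkh(sig: bytes, pubkey: bytes, pubkey_hash: bytes) -> bool:
    """Unlocking script <sig> <pubkey> followed by the locking script
    OP_DUP OP_HASH160 <pubkey_hash> OP_EQUALVERIFY OP_CHECKSIG."""
    stack = [sig, pubkey]               # unlocking script pushes its items
    stack.append(stack[-1])             # OP_DUP duplicates the public key
    stack.append(hash160(stack.pop()))  # OP_HASH160 hashes the duplicate
    stack.append(pubkey_hash)           # locking script pushes expected hash
    if stack.pop() != stack.pop():      # OP_EQUALVERIFY compares the hashes
        return False
    pub = stack.pop()                   # OP_CHECKSIG pops the key, then the
    return check_sig(stack.pop(), pub)  # signature, and verifies them

pubkey = b"\x02" + b"\x11" * 32         # fake 33-byte compressed public key
print(run_p2pkh(b"fake-signature", pubkey, hash160(pubkey)))  # True
```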
submitted by 5tu to BitcoinTechnology

Professional mining calculator and network hashrate graph, with columns for pool fee, exchange fee, power cost, rewards, BSV revenue, BTC revenue, and hourly profit in USD. Bitcoin SV mining calculator for SHA-256: price $159.25, difficulty 272.3419 G, network hashrate 2.0739 EH/s, block reward 6.2524 BSV; includes a list of Bitcoin SV mining pools and of the best mining software. An accurate Bitcoin mining calculator trusted by millions of cryptocurrency miners since May 2013, developed by an early Bitcoin miner to maximize mining profits and calculate ROI for new ASIC miners; the version updated in 2020 makes it simple to quickly calculate mining profitability for your Bitcoin mining hardware. SPV Channels offer encrypted, persistent messaging channels between any Bitcoin participants, seamlessly integrating offline and direct communications to break down the technical barriers to the direct peer-to-peer interactions that Satoshi described as fundamental to the operation of the Bitcoin network.
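The arithmetic behind such calculators is straightforward: your share of the network hashrate, times the expected 144 blocks per day, times the block reward and the coin price, gives gross revenue, from which pool fees and power costs are then subtracted. A minimal sketch using the figures quoted above; the function and parameter names are my own:

```python
def daily_mining_revenue(my_hashrate: float, network_hashrate: float,
                         block_reward: float, coin_price_usd: float,
                         blocks_per_day: float = 144.0) -> float:
    """Expected gross USD revenue per day: your fraction of the network
    hashrate, times blocks found per day, times the reward's USD value."""
    return (my_hashrate / network_hashrate) * blocks_per_day \
        * block_reward * coin_price_usd

# Hypothetical 100 TH/s miner against the figures quoted above:
# 2.0739 EH/s network hashrate, 6.2524 BSV reward, $159.25 per BSV.
print(daily_mining_revenue(100e12, 2.0739e18, 6.2524, 159.25))
# roughly $6.9 per day, before pool fees and power costs
```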
