Mining Speed Test

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two-week (14-day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180-day expiration rules apply. Note, however, that the 180-day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you no longer lose any earned research rewards if you fail to stake a block within 180 days and let your beacon expire.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
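To make the change concrete, here is a minimal Python sketch of the two accrual ideas. This is an illustration only, not the actual Gridcoin C++ implementation; the superblock structure, field names and the magnitude-to-GRC rate are hypothetical placeholders.

    # Illustration of the accrual change described above -- NOT the actual
    # Gridcoin code. Field names and MAG_UNIT are hypothetical placeholders.
    MAG_UNIT = 0.25  # assumed GRC earned per unit of magnitude per day

    def accrual_old(mag_now, mag_last_stake, days_since_stake):
        # Legacy idea: average two magnitude snapshots, then scale by elapsed time.
        return (mag_now + mag_last_stake) / 2 * MAG_UNIT * days_since_stake

    def accrual_new(superblocks, cpid):
        # v11 idea: walk EVERY superblock between stakes and accrue the reward
        # for the interval that each superblock actually covered.
        total = 0.0
        for sb, next_sb in zip(superblocks, superblocks[1:]):
            days = (next_sb["time"] - sb["time"]) / 86400
            total += sb["magnitudes"].get(cpid, 0.0) * MAG_UNIT * days
        return total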

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

My personal experience with Innosilicon A10 Pro (6G) 500Mh ASIC ethash miner

EDIT: This is about the 5G version, not the 6G.
Hello,
Since there are not many consumer tests online about the Innosilicon A10 (Ethmaster) Pro (5G) at 500Mh, I decided to share my personal experience through an "anonymous" account.
I bought it around April 2020; it arrived in May, but for personal reasons I was only able to turn it on this summer :(
The A10 cost me 3242 € + 70 € for the power supply (Innosilicon 1400W Power Supply) + shipping. I will not reveal where I bought it because this is not an ad, but it was through a European ASIC miner reseller.
I know Ethereum 2.0 is coming and I'm aware this is a gamble. I would not advise you to buy it now, especially knowing Eth 2.0 is really coming now, DeFi is pushing at the gates and I heard rumors there is a 750Mh version coming up.
So, this is my first ASIC miner; I did some ZEC mining with a 4 x 1080Ti mining rig two years ago.
EDIT: EthToDoge pointed out in the comments that the A10 isn't an ASIC, technically speaking.
The A10 is basically a box crammed full of laptop GPUs and some custom firmware and made to look like the Bitcoin ASICS. [Check out the comments for more information]
The A10's mining chains reboot every 9 hours on average. When the A10 reboots, it goes into an autotuning mode which can take up to 2 hours, but usually around 1h. During autotuning it starts at 0Mh and only reaches its full speed once tuning is done, not mining much during this phase because autotuning causes a lot of invalid shares, up to 20%, going down to 3% when tuning is completed.
The chain temperatures are around 63°C; I don't know if this is the reason for the reboots. I'll try later on to get better airflow. I fixed the temperature issue I had by placing it in a better ventilated location; the temperature is now around 53°C, but that didn't fix the reboot issue.
miner web interface, you can see the hashrate drop due to the random reboot
Performance settings
I tried the balanced and factory modes, and I didn't see much difference in the reported speed. In the near future I'll try the performance mode, but I will monitor the power consumption when doing so, since the A10 warns me to pay attention to that when I want to enable performance mode in the web interface. The performance mode consumes around 10% to 15% more electricity than the factory mode, without any noticeable difference in hashrate or stability. I didn't have proper tools to measure the power consumption; my A10 was plugged into a UPS and its load went from 43% to 55%, so I'm assuming the difference is the extra power consumption.
Changing performance settings causes the miner to go into autotuning.
Autoupdate
The firmware check is working, but I didn't manage to use the autoupdate. I had no problem manually downloading the firmware and uploading it, so it's not really an issue.
My device:
Type A10L
Controller Version g1
Build Date 15th of July 2020 06:13 AM
Platform Version a10l_20200715_061347

EDIT: I upgraded to the new firmware a10l_20200901_053652, but that didn't fix the reboot issue.

Hashrate
I did some monitoring of the A10, here is how it looks

This is in factory mode on Ethermine (updated on Sept 24th):
Average hashrate of 455Mh/s while running on ethermine
Hashrate of all chains + total hashrate

This is in balanced mode on Ethermine (updated on Sept 25th):
Average hashrate of 449Mh/s while running on ethermine
Hashrate of all chains + total hashrate

This is in factory mode on Nanopool (updated on Sept 29th):
Average hashrate of 502Mh/s while running on Nanopool (note that the double reboot in the middle of the graphic was caused by the change of ETH epoch; otherwise the average hashrate is around 512Mh/s).
Hashrate of all chains + total hashrate
As sweeperAA pointed out, the mining pool really matters.

Quick links :
https://whattomine.com/miners/122-innosilicon-a10-pro-500mh
submitted by xananymous to EtherMining [link] [comments]

Why Osana takes so long? (Programmer's point of view on current situation)

I decided to write a comment about «Why Osana takes so long?» somewhere and what can be done to shorten this time. It turned into a long essay. Here's TL;DR of it:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading to higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear.
Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to do it nonetheless. I am not much interested in Yandere Simulator nor in this genre in general, but this particular development has a lot for any fellow programmers and software engineers to learn from, to ensure that they never end up in Alex's situation, especially considering that he is definitely not the first one to get himself knee-deep in development hell (do you remember Star Citizen?) and he is definitely not the last one.
On the one hand, people see that Alex works incredibly slowly, the equivalent of, like, one hour per day, comparing it with, say, Papers, Please, a game that was developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely thinks that he works until complete exhaustion each day. In fact, I highly suspect that both those sentences are correct! Because of the mistakes made during the early development stages, which are highly unlikely to be fixed due to the pressure put on the developer right now and due to his overall approach to coding, the cost to add any relatively large feature (e.g. Osana) can be pretty much comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about. The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low hanging fruit» crippled the development and, more importantly, what would have been an ideal course of action, from my point of view, to get out of it. I'll try to explain things in the easiest terms possible.
  1. else if's and the lack of any sort of refactoring in general
The most «memey» one. I won't talk about performance though (a switch statement is not better in terms of performance, that is a myth. If the compiler detects some code that can be turned into a jump table, for example, it will do it, no matter whether it is a chain of if's or a switch statement. Compilers nowadays are way smarter than one might think). Just take a look here. I know that it's his older JavaScript code, but, believe it or not, this piece is still present in the C# version relatively untouched.
I refactored this code for you using the C language (mixed with C++ since there's no this pointer in pure C). Note that the else if's are still there; else if's are not the problem by themselves.
The refactored code is just objectively better for one simple reason: it is shorter, while not being obscure, and now it should be able to handle, say, the Trespassing and Blood case without any input from the developer due to the usage of flags. Basically, the shorter your code, the more you can see on screen without spreading your attention too much. As a rule of thumb, the fewer lines there are, the easier it is for you to work with the code. Just don't overdo it, unless you are going to participate in the International Obfuscated C Code Contest. Let me reiterate:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
This is why refactoring — the activity of rewriting your old code so it does the same thing, but does it quicker, in a more generic way, in fewer lines or more simply — is so powerful. In my experience, you can only keep one module/class/whatever in your brain if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17000-line-long class into smaller classes probably won't improve performance at all, but it will make working with parts of this class way easier.
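To show the flag-based idea without reproducing any leaked code, here is a minimal sketch in Python rather than the author's C refactoring; the flag names, reactions and the react() function are invented purely for illustration.

    # Invented example of replacing a chain of near-identical else if's with
    # flags and a lookup table. None of these names come from the actual game.
    WITNESSED_WEAPON = 1 << 0
    WITNESSED_BLOOD = 1 << 1
    WITNESSED_TRESPASSING = 1 << 2

    REACTIONS = {
        WITNESSED_WEAPON: "scream_and_run",
        WITNESSED_BLOOD: "call_teacher",
        WITNESSED_TRESPASSING: "ask_to_leave",
    }

    def react(witnessed_flags):
        # One generic lookup replaces many branches; handling a new case means
        # adding a table entry, not writing another else if.
        for flag, reaction in REACTIONS.items():
            if witnessed_flags & flag:
                return reaction
        return "ignore"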
Is it too late now to start refactoring? Of course NO: better late than never.
  2. Comments
If you think that you wrote this code, so you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere, but just a general idea will help you out in the future. Even if you think that It Just Works™ and you'll never ever need to fix it. Time spent to write and debug one line of code almost always exceeds time to write one comment in large-scale projects. Moreover, the best code is the code that is self-evident. In the example above, what the hell does (float) 6 mean? Why not wrap it around into the constant with a good, self-descriptive name? Again, it won't affect performance, since C# compiler is smart enough to silently remove this constant from the real code and place its value into the method invocation directly. Such constants are here for you.
I rewrote my code above a little bit to illustrate this. With those comments, you don't have to remember your code at all, since its functionality is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge in programming will figure out the purpose of this code. It took me less than half a minute to write those comments, but it'll probably save me quite a lot of time of figuring out «what was I thinking back then» one day.
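For readers who cannot open the linked snippet, here is a trivial, made-up illustration of the same two points (a named constant plus a one-line comment); the name and value are hypothetical, not taken from the project.

    # Hypothetical example: the magic number gets a self-descriptive name and a
    # short comment, so the call site explains itself.
    REACTION_DISTANCE_METERS = 6.0  # how close a student must be to notice the player

    def can_notice(distance_to_player):
        return distance_to_player <= REACTION_DISTANCE_METERS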
Is it too late now to start adding comments? Again, of course NO. Don't be lazy and redirect all your typing from «debunk» page (which pretty much does the opposite of debunking, but who am I to judge you here?) into some useful comments.
  3. Unit testing
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced right now? Is it a problem in your older code which has shown up just because you have never actually used it until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines which automatically execute after each build and check that the environment is still sane and nothing broke on a fundamental level. This is called unit testing, and yes, unit tests won't be able to catch all your bugs, but even getting 20% of bugs identified at an earlier stage is a huge boon to development speed.
Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of the project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will help you prove to yourself that you have not broken anything without needing to run the game at all.
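As a minimal illustration of such a sanity check, here is what one could look like using Python's built-in unittest (a Unity project would use its own test framework instead); the function under test is a hypothetical stand-in, not code from the game.

    # Tiny, self-contained example of an automated sanity check. The game rule
    # tested here is invented purely for illustration.
    import unittest

    def reputation_after_witnessing_murder(current_reputation):
        # Hypothetical rule: witnessing a murder costs 20 reputation,
        # but reputation never drops below zero.
        return max(current_reputation - 20, 0)

    class SanityTests(unittest.TestCase):
        def test_reputation_drops_by_twenty(self):
            self.assertEqual(reputation_after_witnessing_murder(50), 30)

        def test_reputation_never_goes_negative(self):
            self.assertEqual(reputation_after_witnessing_murder(5), 0)

    if __name__ == "__main__":
        unittest.main()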
  4. Static code analysis
This is basically pretty self-explanatory. You set this thing up once, you forget about it. A static code analyzer is another bit of «free real estate» to speed up the development process by finding tiny little errors, mostly silly typos (do you think that you are good enough at finding them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet, it is another tool which will help you out with debugging a little bit along with the debugger, unit tests and other things. You need every little bit of help here.
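As a Python analogue of that shift typo (the author's example is C), this is the kind of statement-with-no-effect that a linter such as pylint will typically flag while a human eye skips right over it.

    # The first shift computes a value and silently throws it away; a static
    # analyzer flags it as a pointless statement, hinting at the missing '='.
    x = 1
    x << 4    # no effect: result is discarded (probably meant x <<= 4)
    x <<= 4   # actually updates x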
Is it too late now to hook up a static code analyzer? Obviously NO.
  5. Code architecture
Say you want to build Osana, but then you decide to implement some feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have essentially just done is complicated your life, because now you should also write Osana code for Snap Mode. The way the game architecture is done right now, easter egg code is deeply interleaved with game logic, which leads to code «spaghettifying», which in turn slows down the addition of new features, because one has to consider how this feature would work alongside each and every old feature and easter egg. Even if it is just glancing over one line per easter egg, it adds to the mess, slowly but surely.
A lot of people mention that the developer should have been doing it in an object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you are doing it the object-oriented way or the usual procedural way; you can theoretically write, say, AI routines in a functional language (e.g. LISP) or even a logic language (e.g. Prolog) if you are brave enough. You can even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way of adding a new feature without interfering with your older code (e.g. by creating a child class which will encapsulate all the things you need)? Go for it, this feature is basically «free» for you. Otherwise you'd better think twice before doing this, because you are going into «technical debt» territory, borrowing time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down in the future that much, right?». Technical debt will incur interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is a huge tale about how just the «interest» incurred by technical debt can control the entire project, like the tail wagging the dog.
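Here is a minimal sketch of the «child class which will encapsulate all the things you need» idea; the class and method names are invented for illustration and are not from the actual project.

    # Invented example: the easter egg lives entirely in a subclass, so the base
    # class and every feature built on top of it stay untouched.
    class Student:
        def update(self):
            self.follow_schedule()
            self.react_to_surroundings()

        def follow_schedule(self):
            pass  # normal daily routine

        def react_to_surroundings(self):
            pass  # normal reactions

    class SnapModeStudent(Student):
        def react_to_surroundings(self):
            self.panic()                      # easter-egg-only behavior
            super().react_to_surroundings()   # then fall back to normal logic

        def panic(self):
            pass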
I won't elaborate here further, since it'll take me an even larger post to fully describe what's wrong about Yandere Simulator's code architecture.
Is it too late to rebuild code architecture? Sadly, YES, although it should be possible to split Student class into descendants by using hooks for individual students. However, code architecture can be improved by a vast margin if you start removing easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and this will make it easier for you to add the «real» features, like Osana or whatever you'd like to accomplish. If you'll ever want them back, you can track them down in Git history and re-implement them one by one, hopefully without performing the shotgun surgery this time.
  6. Loading times
Again, I won't be talking about the performance, since you can debug your game on 20 FPS as well as on 60 FPS, but this is a very different story. Yandere Simulator is huge. Once you fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
  1. Fix the code (unavoidable time loss)
  2. Rebuild the project (can take a loooong time)
  3. Load your game (can take a loooong time)
  4. Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can fix it. For instance, I know that Yandere Simulator generates all the students' photos during loading. Why should that be done there? Why not either move it to the project build stage by adding a build hook so Unity does that for you during a full project rebuild, or, even better, disable it completely or replace it with «PLACEHOLDER» text for debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community.
Is it too late to reduce loading times? Hell NO.
  7. Jenkins
Or any other continuous integration tool. «Rebuild a project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford and a cool motherboard which would support all of that (of course, Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest is not necessary, e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set up this second PC in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From now on, Jenkins takes care of the rest: tracking your Git repository, the (re)build process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports and whatever else you can and want to hook up. More importantly, you can fix another bug while Jenkins is rebuilding the project for the previous one, et cetera.
In general, continuous integration is a great technology for quickly tracking down errors that were introduced in previous versions, helping to avoid those kinds of bug hunting sessions. I am highly unsure whether continuous integration is needed for projects of 10,000-20,000 source lines, but things can be different as soon as we step into 100k+ territory, and Yandere Simulator by now has approximately 150k+ source lines of code. I think continuous integration might well be worth it for Yandere Simulator.
Is it too late to add continuous integration? NO, albeit it is going to take some time and skills to set up.
  8. Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, who is also a notorious edgelord who, for example, also once told somebody to kill himself, just like… However, horrible person or not, SgtMarkIV gets his job done. He simply does not care much about public opinion. That's the difference.
  9. Go outside
Enough said. Your brain works slower if you only think about games and if you can't provide it with enough oxygen supply. I know that this one is probably the hardest to implement, but…
That's all, folks.
Bonus: Just think how short this list would have been if someone had simply listened to Mike Zaimont instead of breaking down in tears.
submitted by Dezhitse to Osana [link] [comments]

vectorbt - blazingly fast backtesting and interactive data analysis for quants

I want to share with you a tool that I was continuously developing during the last couple of months.
https://github.com/polakowo/vectorbt

As a data scientist, when I first started flirting with quant trading, I quickly realized that there is a shortage of Python packages that can actually enable me to iterate over a long list of possible strategies and hyper-parameters quickly. Most open-source backtesting libraries are quite advanced in terms of functionality, but simply lack speed. Questions like "Which strategy is better: X or Y?" require fast computation and transformation of data. This not only prolongs the cycle of designing strategies, but is also dangerous: a limited number of tests is similar to tunnel vision - it prevents you from seeing the bigger picture and makes you dive into the market blindly.
After trying to tweak pandas, multiprocessing, and even evaluating my strategies on a cluster with Spark, I finally found myself using Numba - a Python library that compiles slow Python code to run at native machine-code speed. And since there were no packages in the Python ecosystem that could even closely match the speed of my own backtests, I made vectorbt.
vectorbt combines pandas, NumPy and Numba sauce to obtain orders-of-magnitude speedup over other libraries. It builds upon the idea that each instance of a trading strategy can be represented in a vectorized form, so multiple strategy instances can be packed into a single multi-dimensional array. In this form, they can be processed in a highly efficient manner and compared easily. It also integrates Plotly and ipywidgets to display complex charts and dashboards akin to Tableau right in the Jupyter notebook. You can find basic examples and explanations in the documentation.

Below is an example of doing in total 67,032 tests on three different timeframes of Bitcoin price history to explore how performance of a MACD strategy depends upon various combinations of fast, slow and signal windows:
    import vectorbt as vbt
    import numpy as np
    import yfinance as yf
    from itertools import combinations, product

    # Fetch daily price of Bitcoin
    price = yf.Ticker("BTC-USD").history(period="max")['Close']
    price = price.vbt.split_into_ranges(n=3)

    # Define hyper-parameter space
    # 49 fast x 49 slow x 19 signal
    fast_windows, slow_windows, signal_windows = vbt.indicators.create_param_combs(
        (product, (combinations, np.arange(2, 51, 1), 2), np.arange(2, 21, 1)))

    # Run MACD indicator
    macd_ind = vbt.MACD.from_params(
        price,
        fast_window=fast_windows,
        slow_window=slow_windows,
        signal_window=signal_windows,
        hide_params=['macd_ewm', 'signal_ewm']
    )

    # Long when MACD is above zero AND signal
    entries = macd_ind.macd_above(0) & macd_ind.macd_above(macd_ind.signal)

    # Short when MACD is below zero OR signal
    exits = macd_ind.macd_below(0) | macd_ind.macd_below(macd_ind.signal)

    # Build portfolio
    portfolio = vbt.Portfolio.from_signals(
        price.vbt.tile(len(fast_windows)), entries, exits, fees=0.001, freq='1D')

    # Draw all window combinations as a 3D volume
    fig = portfolio.total_return.vbt.volume(
        x_level='macd_fast_window',
        y_level='macd_slow_window',
        z_level='macd_signal_window',
        slider_level='range_start',
        template='plotly_dark',
        trace_kwargs=dict(
            colorscale='Viridis',
            colorbar=dict(
                title='Total return',
                tickformat='%'
            )
        )
    )
    fig.show()

https://reddit.com/link/hxl6bn/video/180sxqa8mzc51/player
From signal generation to data visualization, the example above needs roughly a minute to run.

vectorbt lets you
The current implementation has limitations though:

If it sounds cool enough, try it out! I would love it if you'd give me some feedback and contribute to it at some point, as the codebase has grown very fast. Cheers.
submitted by plkwo to algotrading [link] [comments]

UNI Airdrop opens a new phase of DeFi and AESwap is set to become aelf’s trump card to up its DeFi game

Today the decentralized exchange Uniswap officially released its governance token UNI. Due to the popularity of the UNI airdrop, the gas fee for a single transfer has been pushed up to 660gwei, about $5.27. With the release of Uniswap’s token, the DeFi sector has entered a new phase of development, with new opportunities up for grabs.
In the first half of this year, the DeFi sector was thriving as various projects sprang up one after another. The boom caught many people by surprise. The catalyst was the launch of the COMP Token on June 17th. After COMP started trading its price surged, quickly leading to token issuance by a slew of DeFi projects which were much sought after by investors.
Since June 17th, in just over three months, the DeFi sector has seen great fortunes being made. The most well-known example is YFI. As of this writing, the price of YFI has reached $33,592.79, which is even higher than that of Bitcoin at its peak. Even the price of its fork YFII has reached $4,627.24.
Where there is exuberance, there is a bubble, like two sides of the same coin. Recently, the price of SUSHI plummeted after Chef Nomi, founder of SushiSwap, cashed out nearly $14 million. Next to that, Emerald Mine (EMD), a liquidity mining DeFi project on EOS, appeared to be an exit scam. Such events should give investors pause and make them wonder: has DeFi come to a dead end, and where will DeFi go next?
We all know that the blockchain industry is essentially characterized by its decentralization, anonymity and lack of supervision. This means that anyone can deploy contracts on the public blockchain. Therefore, the ecosystems of public chains are a mixed bunch. There are good projects, but there will also be scams. The ICO craze in 2017 was a hard lesson to learn, and the current DeFi boom is no exception.
However, today’s DeFi is very different from ICO. For example, DeFi has practical applications and brings real returns to users, whereas the projects behind the ICOs never came up with any real product. In addition, the key driving force for DeFi’s further development lies in the innovation in its models.
After seeing so many projects come and go it’s not hard to see that the success of a project in DeFi depends on whether the project’s model has a positive impact on users, such as incentives, user experience, and returns. This is also the fundamental driving force of DeFi projects and hinges on the technical strength of the project, interface design, mechanism design, etc.
As a result, the DeFi projects that focus on the technology and work hard will reach new heights, whereas those exit scams will not survive. Aelf has passed the test of time and showed it belongs to the hard-working category.
AESwap, launched by aelf recently, is the first DeFi project based on the aelf network. It is committed to building a world-leading decentralized trading platform and a more efficient, convenient and safer DeFi product than Uniswap. It has already integrated token exchange, adding liquidity to earn income, and creating peer-to-peer transactions.
In the future, aelf will continue to make efforts in cross-chain DeFi. Thanks to the unlimited scalability of the aelf blockchain system, its protocol-layer cross-chain mechanism and its multi-level side chain design, aelf is able to keep gas fees low while maintaining fast transaction speeds. Moreover, applications in the aelf ecosystem can also interact with the Ethereum ecosystem. With its well-developed cross-chain mechanism and high-performance contracts, aelf is able to address Ethereum's problems of limited performance and transaction congestion.
Although the current DeFi sector does have an element of hype and bubble, the products of these DeFi projects do have real market demand. The DeFi sector will continue to grow, integrate and optimize. After the hype is over, the market will eliminate the bad projects and only the good ones will remain.
In this new phase of DeFi development, participants will be less enthusiastic and the growth of DeFi projects will not be as fast. I believe that the development of DeFi will become more and more rational, and AESwap will live up to our expectations.
submitted by Floris-Jan to aelfofficial [link] [comments]

Syscoin Platform’s Great Reddit Scaling Bake-off Proposal

https://preview.redd.it/rqt2dldyg8e51.jpg?width=1044&format=pjpg&auto=webp&s=777ae9d4fbbb54c3540682b72700fc4ba3de0a44
We are excited to participate and present Syscoin Platform's ideal characteristics and capabilities towards a well-rounded Reddit Community Points solution!
Our scaling solution for Reddit Community Points involves 2-way peg interoperability with Ethereum. This will provide a scalable token layer built specifically for speed and high volumes of simple value transfers at a very low cost, while providing sovereign ownership and onchain finality.
Token transfers scale by taking advantage of a globally sorting mempool that provides for probabilistically secure assumptions of “as good as settled”. The opportunity here for token receivers is to have an app-layer interactivity on the speed/security tradeoff (99.9999% assurance within 10 seconds). We call this Z-DAG, and it achieves high-throughput across a mesh network topology presently composed of about 2,000 geographically dispersed full-nodes. Similar to Bitcoin, however, these nodes are incentivized to run full-nodes for the benefit of network security, through a bonded validator scheme. These nodes do not participate in the consensus of transactions or block validation any differently than other nodes and therefore do not degrade the security model of Bitcoin’s validate first then trust, across every node. Each token transfer settles on-chain. The protocol follows Bitcoin core policies so it has adequate code coverage and protocol hardening to be qualified as production quality software. It shares a significant portion of Bitcoin’s own hashpower through merged-mining.
This platform as a whole can serve token microtransactions, larger settlements, and store-of-value in an ideal fashion, providing probabilistic scalability whilst remaining decentralized according to Bitcoin design. It is accessible to ERC-20 via a permissionless and trust-minimized bridge that works in both directions. The bridge and token platform are currently available on the Syscoin mainnet. This has been gaining recent attention for use by loyalty point programs and stablecoins such as Binance USD.

Solutions

Syscoin Foundation identified a few paths for Reddit to leverage this infrastructure, each with trade-offs. The first provides the most cost-savings and scaling benefits at some sacrifice of token autonomy. The second offers more preservation of autonomy with a more narrow scope of cost savings than the first option, but savings even so. The third introduces more complexity than the previous two yet provides the most overall benefits. We consider the third as most viable as it enables Reddit to benefit even while retaining existing smart contract functionality. We will focus on the third option, and include the first two for good measure.
  1. Distribution, burns and user-to-user transfers of Reddit Points are entirely carried out on the Syscoin network. This full-on approach to utilizing the Syscoin network provides the most scalability and transaction cost benefits of these scenarios. The tradeoff here is distribution and subscription handling likely migrating away from smart contracts into the application layer.
  2. The Reddit Community Points ecosystem can continue to use existing smart contracts as they are used today on the Ethereum mainchain. Users migrate a portion of their tokens to Syscoin, the scaling network, to gain much lower fees, scalability, and a proven base layer, without sacrificing sovereign ownership. They would use Syscoin for user-to-user transfers. Tips redeemable in ten seconds or less, a high-throughput relay network, and onchain settlement at a block target of 60 seconds.
  3. Integration between Matic Network and Syscoin Platform - similar to Syscoin’s current integration with Ethereum - will provide Reddit Community Points with EVM scalability (including the Memberships ERC777 operator) on the Matic side, and performant simple value transfers, robust decentralized security, and sovereign store-of-value on the Syscoin side. It’s “the best of both worlds”. The trade-off is more complex interoperability.

Syscoin + Matic Integration

Matic and Blockchain Foundry Inc, the public company formed by the founders of Syscoin, recently entered a partnership for joint research and business development initiatives. This is ideal for all parties as Matic Network and Syscoin Platform provide complementary utility. Syscoin offers characteristics for sovereign ownership and security based on Bitcoin’s time-tested model, and shares a significant portion of Bitcoin’s own hashpower. Syscoin’s focus is on secure and scalable simple value transfers, trust-minimized interoperability, and opt-in regulatory compliance for tokenized assets rather than scalability for smart contract execution. On the other hand, Matic Network can provide scalable EVM for smart contract execution. Reddit Community Points can benefit from both.
Syscoin + Matic integration is actively being explored by both teams, as it is helpful to Reddit, Ethereum, and the industry as a whole.

Proving Performance & Cost Savings

Our POC focuses on 100,000 on-chain settlements of token transfers on the Syscoin Core blockchain. Transfers and burns perform equally with Syscoin. For POCs related to smart contracts (subscriptions, etc), refer to the Matic Network proposal.
On-chain settlement of 100k transactions was accomplished within roughly twelve minutes, well-exceeding Reddit’s expectation of five days. This was performed using six full-nodes operating on compute-optimized AWS c4.2xlarge instances which were geographically distributed (Virginia, London, Sao Paulo Brazil, Oregon, Singapore, Germany). A higher quantity of settlements could be reached within the same time-frame with more broadcasting nodes involved, or using hosts with more resources for faster execution of the process.
Addresses used: 100,014
The demonstration was executed using this tool. The results can be seen in the following blocks:
612722: https://sys1.bcfn.ca/block/6d47796d043bb4c508d29123e6ae81b051f5e0aaef849f253c8f3a6942a022ce
612723: https://sys1.bcfn.ca/block/8e2077f743461b90f80b4bef502f564933a8e04de97972901f3d65cfadcf1faf
612724: https://sys1.bcfn.ca/block/205436d25b1b499fce44c29567c5c807beaca915b83cc9f3c35b0d76dbb11f6e
612725: https://sys1.bcfn.ca/block/776d1b1a0f90f655a6bbdf559ff5072459cbdc5682d7615ff4b78c00babdc237
612726: https://sys1.bcfn.ca/block/de4df0994253742a1ac8ac9eec8d2a8c8b0a6d72c53d6f3caa29bb6c171b0a6b
612727: https://sys1.bcfn.ca/block/e5e167c52a9decb313fbaadf49a5e34cb490f8084f642a850385476d4ef10d70
612728: https://sys1.bcfn.ca/block/ab64d989edc71890e7b5b8491c20e9a27520dc45a5f7c776d3dae79057f59fe7
612729: https://sys1.bcfn.ca/block/5e8b7ecd0e36f99d07e4ea6e135fc952bf7ec30164ab6f4d1e98b0f2d405df6d
612730: https://sys1.bcfn.ca/block/d395df3d31dde60bbb0bece6bd5b358297da878f0beb96be389e5f0e043580a3
It is important to note that this POC is not focused on Z-DAG. The performance of Z-DAG has been benchmarked within realistic network conditions: Whiteblock’s audit is publicly available. Network latency tests showed an average TPS around 15k with burst capacity up to 61k. Zero-latency control group exhibited ~150k TPS. Mainnet testing of the Z-DAG network is achievable and will require further coordination and additional resources.
Even further optimizations are expected in the upcoming Syscoin Core release which will implement a UTXO model for our token layer bringing further efficiency as well as open the door to additional scaling technology currently under research by our team and academic partners. At present our token layer is account-based, similar to Ethereum. Opt-in compliance structures will also be introduced soon which will offer some positive performance characteristics as well. It makes the most sense to implement these optimizations before performing another benchmark for Z-DAG, especially on the mainnet considering the resources required to stress-test this network.

Cost Savings

Total cost for these 100k transactions: $0.63 USD
See the live fee comparison for savings estimation between transactions on Ethereum and Syscoin. Below is a snapshot at time of writing:
ETH price: $318.55 ETH gas price: 55.00 Gwei ($0.37)
Syscoin price: $0.11
Snapshot of live fee comparison chart
Z-DAG provides a more efficient fee-market. A typical Z-DAG transaction costs 0.0000582 SYS. Tokens can be safely redeemed/re-spent within seconds or allowed to settle on-chain beforehand. The costs should remain about this low for microtransactions.
Syscoin will achieve further reduction of fees and even greater scalability with offchain payment channels for assets, with Z-DAG as a resilience fallback. New payment channel technology is one of the topics under research by the Syscoin development team with our academic partners at TU Delft. In line with the calculation in the Lightning Networks white paper, payment channels using assets with Syscoin Core will bring theoretical capacity for each person on Earth (7.8 billion) to have five on-chain transactions per year, per person, without requiring anyone to enter a fee market (aka “wait for a block”). This exceeds the minimum LN expectation of two transactions per person, per year; one to exist on-chain and one to settle aggregated value.
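As a rough sanity check of that figure (not an official benchmark), the arithmetic below uses numbers quoted elsewhere in this proposal: roughly 16 MB of block bandwidth per minute and ~200-byte transactions.

    # Back-of-the-envelope check of the "five on-chain transactions per person
    # per year" figure, using numbers quoted elsewhere in this proposal.
    block_bytes_per_minute = 16_000_000   # ~16 MB of block bandwidth per minute
    avg_tx_bytes = 200                    # assumed average transaction size
    tx_per_minute = block_bytes_per_minute / avg_tx_bytes   # 80,000
    tx_per_year = tx_per_minute * 60 * 24 * 365             # ~42 billion
    world_population = 7.8e9
    print(tx_per_year / world_population)                   # ~5.4 per person per year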

Tools, Infrastructure & Documentation

Syscoin Bridge

Mainnet Demonstration of Syscoin Bridge with the Basic Attention Token ERC-20
A two-way blockchain interoperability system that uses Simple Payment Verification to enable:
  • Any Standard ERC-20 token to be moved from Ethereum to the Syscoin blockchain as a Syscoin Platform Token (SPT), and back to Ethereum
  • Any SPT to be moved from Syscoin to the Ethereum blockchain as an ERC-20 token, and back to Syscoin

Benefits

  • Permissionless
  • No counterparties involved
  • No trading mechanisms involved
  • No third-party liquidity providers required
  • Cross-chain Fractional Supply - 2-way peg - Token supply maintained globally
  • ERC-20s gain vastly improved transactionality with the Syscoin Token Platform, along with the security of bitcoin-core-compliant PoW.
  • SPTs gain access to all the tooling, applications and capabilities of Ethereum for ERC-20, including smart contracts.
https://preview.redd.it/l8t2m8ldh8e51.png?width=1180&format=png&auto=webp&s=b0a955a0181746dc79aff718bd0bf607d3c3aa23
https://preview.redd.it/26htnxzfh8e51.png?width=1180&format=png&auto=webp&s=d0383d3c2ee836c9f60b57eca35542e9545f741d

Source code

https://github.com/syscoin/?q=sysethereum
Main Subprojects

API

Tools to simplify using Syscoin Bridge as a service with dapps and wallets will be released some time after implementation of Syscoin Core 4.2. These will be based upon the same processes which are automated in the current live Sysethereum Dapp that is functioning with the Syscoin mainnet.

Documentation

Syscoin Bridge & How it Works (description and process flow)
Superblock Validation Battles
HOWTO: Provision the Bridge for your ERC-20
HOWTO: Setup an Agent
Developer & User Diligence

Trade-off

The Syscoin Ethereum Bridge is secured by Agent nodes participating in a decentralized and incentivized model that involves roles of Superblock challengers and submitters. This model is open to participation. The benefits here are trust-minimization, permissionless-ness, and potentially less legal/regulatory red-tape than interop mechanisms that involve liquidity providers and/or trading mechanisms.
The trade-off is that due to the decentralized nature there are cross-chain settlement times of one hour to cross from Ethereum to Syscoin, and three hours to cross from Syscoin to Ethereum. We are exploring ways to reduce this time while maintaining decentralization via zkp. Even so, an “instant bridge” experience could be provided by means of a third-party liquidity mechanism. That option exists but is not required for bridge functionality today. Typically bridges are used with batch value, not with high frequencies of smaller values, and generally it is advantageous to keep some value on both chains for maximum availability of utility. Even so, the cross-chain settlement time is good to mention here.

Cost

Ethereum -> Syscoin: Matic or Ethereum transaction fee for bridge contract interaction, negligible Syscoin transaction fee for minting tokens
Syscoin -> Ethereum: Negligible Syscoin transaction fee for burning tokens, 0.01% transaction fee paid to Bridge Agent in the form of the ERC-20, Matic or Ethereum transaction fee for contract interaction.

Z-DAG

Zero-Confirmation Directed Acyclic Graph is an instant settlement protocol that is used as a complementary system to proof-of-work (PoW) in the confirmation of Syscoin service transactions. In essence, a Z-DAG is simply a directed acyclic graph (DAG) where validating nodes verify the sequential ordering of transactions that are received in their memory pools. Z-DAG is used by the validating nodes across the network to ensure that there is absolute consensus on the ordering of transactions and no balances are overflowed (no double-spends).

Benefits

  • Unique fee-market that is more efficient for microtransaction redemption and settlement
  • Uses decentralized means to enable tokens with value transfer scalability that is comparable or exceeds that of credit card networks
  • Provides high throughput and secure fulfillment even if blocks are full
  • Probabilistic and interactive
  • 99.9999% security assurance within 10 seconds
  • Can serve payment channels as a resilience fallback that is faster and lower-cost than falling-back directly to a blockchain
  • Each Z-DAG transaction also settles onchain through Syscoin Core at 60-second block target using SHA-256 Proof of Work consensus
https://preview.redd.it/pgbx84jih8e51.png?width=1614&format=png&auto=webp&s=5f631d42a33dc698365eb8dd184b6d442def6640

Source code

https://github.com/syscoin/syscoin

API

Syscoin-js provides tooling for all Syscoin Core RPCs including interactivity with Z-DAG.

Documentation

Z-DAG White Paper
Useful read: An in-depth Z-DAG discussion between Syscoin Core developer Jag Sidhu and Brave Software Research Engineer Gonçalo Pestana

Trade-off

Z-DAG enables the ideal speed/security tradeoff to be determined per use-case in the application layer. It minimizes the sacrifice required to accept and redeem fast transfers/payments while providing more-than-ample security for microtransactions. This is supported on the premise that a Reddit user receiving points does need security yet generally doesn’t want nor need to wait for the same level of security as a nation-state settling an international trade debt. In any case, each Z-DAG transaction settles onchain at a block target of 60 seconds.

Syscoin Specs

Syscoin 3.0 White Paper
(4.0 white paper is pending. For improved scalability and less blockchain bloat, some features of v3 no longer exist in current v4: Specifically Marketplace Offers, Aliases, Escrow, Certificates, Pruning, Encrypted Messaging)
  • 16MB block bandwidth per minute assuming segwit witness carrying transactions, and transactions ~200 bytes on average
  • SHA256 merge mined with Bitcoin
  • UTXO asset layer, with base Syscoin layer sharing identical security policies as Bitcoin Core
  • Z-DAG on asset layer, bridge to Ethereum on asset layer
  • On-chain scaling with prospect of enabling enterprise grade reliable trustless payment processing with on/offchain hybrid solution
  • Focus only on Simple Value Transfers. MVP of blockchain consensus footprint is balances and ownership of them. Everything else can reduce data availability in exchange for scale (Ethereum 2.0 model). We leave that to other designs, we focus on transfers.
  • Future integrations of MAST/Taproot to get more complex value transfers without trading off trustlessness or decentralization.
  • Zero-knowledge proofs are a new cryptographic frontier. We are dabbling here to generalize the concept of bridging and also to verify the state of a chain efficiently. We also apply it in our Digital Identity projects at Blockchain Foundry (a publicly traded company which develops Syscoin software for clients). We are also looking to integrate privacy-preserving payment channels for off-chain payments through a zkSNARK hub & spoke design which does not suffer from the HTLC attack vectors evident on LN. Many of the issues plaguing the Lightning Network can be resolved using a zkSNARK design whilst also providing the ability to do a multi-asset payment channel system. We have currently found a showstopper attack (American Call Option) on LN if we were to use multiple assets. This would not exist in a system such as this.

Wallets

Web3 and mobile wallets are under active development by Blockchain Foundry Inc as WebAssembly applications and expected for release not long after mainnet deployment of Syscoin Core 4.2. Both of these will be multi-coin wallets that support Syscoin, SPTs, Ethereum, and ERC-20 tokens. The Web3 wallet will provide functionality similar to Metamask.
Syscoin Platform and tokens are already integrated with Blockbook. Custom hardware wallet support currently exists via ElectrumSys. First-class HW wallet integration through apps such as Ledger Live will exist after 4.2.
Current supported wallets
Syscoin Spark Desktop
Syscoin-Qt

Explorers

Mainnet: https://sys1.bcfn.ca (Blockbook)
Testnet: https://explorer-testnet.blockchainfoundry.co

Thank you for close consideration of our proposal. We look forward to feedback, and to working with the Reddit community to implement an ideal solution using Syscoin Platform!

submitted by sidhujag to ethereum [link] [comments]

Mainnet project: an important change. If you are a donor, please read.

Hi everybody.
It has been one week since the mainnet project got the funding and I have an important update to make.
A little bit about the progress: I've found a wonderful developer, who is helping with the library, so it is starting to take some shape. I'm ironing out our REST API, got some useful feedback, continuing to do so. About 0.17% of the total funding spent so far.
The important update though is that I have decided to take the development and spending private, instead of public. Before I explain what that means and why, I understand that it might upset some donors. So, if you have pledged any amount and disagree with my change for any reason - please contact me (DM, or [email protected]) and I'll refund your pledge completely, no questions asked.
(Please sign any message using the address that you used, to prove that you sent the funds; see the list of donors here to find your pledge, and the link to the funding donation to find which address you sent from.)
If more than 50% of pledges ask for money back, I'll just return everything to everybody in full and we'll consider the project cancelled. At that point anyone willing to take on the project (via a new Flipstarter or something), I'll donate the domain to them. Everything that is done so far is MIT licensed, so anyone is free to take it at any moment.
Let the market decide!
I've got to tell you that I'm a bit disappointed with our progress so far. I expected a lot of people willing to earn some money, but I've got only 4 relevant developers, 3 of them passed a very simple test, only one is actually doing anything.
I did not expect this when I promised to work publicly and with BCH developers.
Another problem is that I have a certain vision that I described in the project description. In addition to that vision there is also a lot of experience talking to read.cash users. A lot of them are in countries with very bad Internet (2G, a few kilobytes per second), using very old Android phones (10+ years, the size of an iPhone 4 and half the speed of an iPhone 4). And I also really hope that someday we will have 100MB blocks, 1GB, 1TB blocks. But now I'm tied up in arguments with BCH developers who argue that many current solutions are good enough already and we don't need to change them - just build on top of a few convoluted and complex protocols, just download a block when needed (again, Africa, 2G, 100MB blocks), just download 640,000 block headers, listen to the whole mempool (with a 1TB block we'll have a 1TB mempool) - it's fine, blocks are tiny... Just send a few queries (now)... Just download the mempool fully.
(To those of you that know what this is about, please don't name names, I'm not here to play the blame game, everybody is entitled to their own opinions. It's fine.)
If your wallet becomes too big - create a new one. It's fine.
Sidenote: my read.cash wallet that gets the fees takes a few hours to open now, and it's barely 9 months old! I find current solutions unacceptable, I want my wallet to open up immediately and handle 100MB blocks as well as 60KB blocks.
I don't want to develop for tiny blocks or tiny wallets that need to be changed every few months.. I want huge blocks! I don't want mainnet to be as brittle as to break at the first sight of success.
A few of these discussions got me really tired and I have no leverage on these guys. They have money now, they have their vision, I have mine, described on the site, they don't want to do it my way. I didn't collect the funds to do it their way.
Yet I have made a commitment to work with them.
This is very tiresome. I feel like I've got myself into a trap - I have to work with these people, they don't want to work on my stuff.
This is just stupid.
One more thing is that now that I have Slack - I'm caught in endless private discussions of people trying to sell me their vision of how stuff should be done or questions about me or read.cash... I didn't sign up for that, I barely have any time to do the work, I don't have time for this, sorry.
Change #1: Private development
Having said that, I'm moving the project to private development.
Frankly, all I care about is getting this project done. I added an additional burden on myself by doing the development publicly. And it's tiresome.
The plan would be to hire some outside developers, using regular contracts, so that they don't have THEIR ideas on how to do the project and they'll just do what I described.
I think everybody cares about the end result - library working, document being written, etc...
Change #2: Private spending
Hiring developers also means salaries. When people (in the real world) know other people's salaries, it leads to conflicts. I went through this experiment (public salaries) once in my life; I won't go through that again. Even people knowing your budget becomes a problem, since they start to bargain with you. (Again, we're talking about outside developers; they are not interested in BCH's success, they are interested in getting as much money as possible.)
By private spending I mean that I'll periodically post how much is done and approximately how much of the funds is left, but no details on who got what for what. Right now 99.83% of the funds are left.
Some of you might see it as a money grab or something else - I can't blame you, but I'd rather see this project cancelled by market forces than drown in endless fights about why we should do exactly nothing, or do their idea instead, or hope for small blocks and use what we have no matter how convoluted or hard it is, or why somebody's hourly rate should be bigger than that guy's.
Will this lead to everyone cancelling their donations? It sure could! It's voluntary funding after all, I can't force anyone to love what I do or how I do it.
If you donated and want a refund to your original address - just ping me.
When this post is 48 hours old, if more than 50% pledges remain, the project will move on as described above. If 50%+ cancels - everybody gets refunds to their original addresses.
submitted by readcash to btc

Complete Guide to OverdriveNTool

We present the complete guide to overclocking GPUs with OverdriveNTool for your Ethereum mining rig! In this special we cover OverdriveNTool, in our opinion the most efficient, fast and immediate software for overclocking GPUs dedicated to mining.
The interface is simple and no-frills, as if to underline that the program was designed to get straight to the point.
We remind you that after installing the drivers (see our guide to building a 6 GPU Ethereum mining rig) you will need to go into Radeon Settings, select Gaming, then Global Settings, and for each GPU in your mining rig make sure that HBCC memory is disabled. Do the same with the Crossfire option, checking that it is also disabled. Reboot the system and verify that all video cards indeed have HBCC and Crossfire disabled before proceeding.
The software download and technical specifications are at the following link: https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/
Recall that the GPUs in Atiflash will numerically correspond to the GPUs in ONT and Claymore, without misalignment.
First we open in ONT our BIOS previously modified with Bios Polaris or, alternatively, a stable BIOS mod downloaded from specialized sources such as Anorak. However, we can also overclock the GPU's original BIOS. Follow the OverdriveNTool guide carefully when operating at this level!
Click on New to create a new profile for the selected GPU. At first you will be on GPU 0, which corresponds to GPU 0 in Atiflash and Claymore. I repeat once again: identical GPUs can behave differently; for this reason, the most stable final overclock may vary from card to card. It is sufficient to load the first profile on each subsequent card, select New, make the necessary changes and save it under a different name (ideally a recognizable one, such as GPU1-OC Memory or GPU2-Temp, etc.).
The stages of the GPU and RAM: on the left we find the stages, or clock states, of the GPU with the corresponding voltage for each one. Some users disable the first six stages (P1 to P6) so that once the miner is launched, the GPU jumps straight to the last stage. For those who, like us, restart the rig only once every 2 or 3 days, or even less often, it is an unnecessary step.
We recommend, at least for the first tests, leaving them enabled. Once you have reached the limit of the video card, you can check whether disabling them brings some improvement in the hashrate shown on screen without the pool being affected. Our real goal is a high hashrate with a minimal percentage of errors on the pool, even at the expense of a slightly lower hashrate reported by the rig.
In the central part we find the memory speed divided into three stages. We will work directly on the last one.
On the right you can see the fan speed, the target temperature the fans must maintain (in our modded BIOS it is set at 75°C, which we obviously never reached), and the acoustic limit (in a rig this is a parameter to always keep in mind).
The last section at the bottom right, Power, is divided into the maximum reachable temperature (set at 84°C on our Pulse cards and 75°C on the XFX) and the Power Target, strictly linked to the modified BIOS we are overclocking. At the end of all the tests, in the event of instability on one or more GPUs, you can try giving less power, starting from -25%.
In this guide we will refer to the XFX RX 580 8GB GDDR5, with GPU clock at 1200 MHz and memory at 2150 MHz: eight theoretically identical video cards in total.
Let's put into practice what has been written up to now ...
We immediately opted to lock the stages, operating directly on the last one for both the GPU clock and the RAM. From these levels you start lowering the voltage of both the GPU and the RAM, each time checking hashrate, power consumption and system stability (usually 5-10 minutes are enough). When the voltage is too low, the GPU will not start mining.

The goal is to obtain the best performance/consumption ratio, always checking the results against what the pool reports. A very high hashrate or very low consumption can often produce numerous errors during mining.


With 8 RX580 8GB video cards we reached a total consumption (thus including all the components of the RIG) of 770 Watts for an average of less than 100 Watts per GPU.

The result was achieved by bringing the GPU core voltage to 1000 mV and the RAM voltage to 900 mV. Lower values are theoretically possible but could cause system instability. As mentioned previously, each video card is different from the others, and on one of the eight GPUs we were forced to lower the power target by 25%.

After these tweaks, we got results on the pool with a hashrate often higher than 240 MH/s.
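To compare profiles, it helps to turn these figures into an efficiency number. A minimal sketch in Python, assuming the rig numbers quoted above (your own wall-meter and pool readings are what actually count):

```python
# Rough efficiency check for the 8-GPU RX 580 rig, using the figures
# quoted in this guide (treat them as assumptions, not measurements).
total_power_w = 770        # wall power for the whole rig, in watts
gpu_count = 8
pool_hashrate_mhs = 240    # hashrate reported by the pool, in MH/s

watts_per_gpu = total_power_w / gpu_count
mhs_per_gpu = pool_hashrate_mhs / gpu_count
efficiency = pool_hashrate_mhs / total_power_w   # MH/s per watt

print(f"{watts_per_gpu:.1f} W per GPU")          # ~96.3 W
print(f"{mhs_per_gpu:.1f} MH/s per GPU")         # ~30.0 MH/s
print(f"{efficiency:.3f} MH/s per watt")         # ~0.312
```

A profile that raises the last number without raising the pool's error rate is the one worth keeping.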


We would like to emphasize that GPU overclocking is by far the operation that will take you the longest. It can take hours to reach the so-called "sweet spot" of each video card. Our OverdriveNTool guide will surely help you!

But this achievement will give you great satisfaction, we guarantee it.
Below are the stable settings for the RX Vega 64 video cards of our 13 GPU mining rig, of which you can see some videos on our YouTube channel: https://www.youtube.com/channel/UCdE9TTHAOtyKxy59rALSprA

See you soon for the next guide dedicated to mining!

If you liked this article and would like to contribute with a donation:

Bitcoin: 1Ld9b165ZYHZcY9eUQmL9UjwzcphRE5S8Z
Ethereum: 0x8D7E456A11f4D9bB9e6683A5ac52e7DB79DBbEE7
Litecoin: LamSRc1jmwgx5xwDgzZNoXYd6ENczUZViK
Stellar: GBLDIRIQWRZCN5IXPIKYFQOE46OG2SI7AFVWFSLAHK52MVYDGVJ6IXGI
Ripple: rUb8v4wbGWYrtXzUpj7TxCFfUWgfvym9xf
By: cryptoall.it Telegram Channel: t.me/giulo75 Netbox Browser: https://netbox.global/PZn5A
submitted by Giulo75 to u/Giulo75

Why I'm bullish on Zilliqa (long read)

Edit: TL;DR added in the comments
 
Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and from that moment on I have had a keen interest in smart contract platforms. I'm passionate about Ethereum but I find Zilliqa to have a better risk-reward ratio. Especially because Zilliqa has found an elegant balance between being secure, decentralized and scalable in my opinion.
 
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on 'silica' (silicon dioxide), as in "silicon for the high-throughput consensus computer."
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
 
Technology and some more:
 
Introduction
 
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017 and since then they have achieved everything stated in it; they also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
 
Mainnet has been live since the end of January 2019 with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left to be issued. The maximum supply is 21 billion, with annual inflation currently at 7.13% and only decreasing over time.
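As a rough illustration of what those supply figures imply, here is a back-of-the-envelope calculation assuming the approximate numbers from the paragraph above (the real emission schedule is fixed by the protocol and the inflation rate keeps decreasing):

```python
# Back-of-the-envelope ZIL emission estimate from the figures above.
circulating = 11_000_000_000    # ~11 billion ZIL circulating (approximate)
max_supply = 21_000_000_000     # 21 billion ZIL hard cap
annual_inflation = 0.0713       # 7.13% per year, and decreasing over time

yearly_issuance = circulating * annual_inflation
remaining = max_supply - circulating

print(f"~{yearly_issuance / 1e6:.0f} M ZIL minted per year at the current rate")  # ~784 M
print(f"~{remaining / 1e9:.0f} B ZIL left to be mined toward the cap")            # ~10 B
```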
 
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing but decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain, where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms and Zilliqa uses network, transaction and computational sharding. Network sharding opens up the possibility of using transaction and computational sharding on top. Zilliqa does not use state sharding for now. We'll come back to this later.
 
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it's good to keep in mind that making a blockchain decentralised, secure and scalable is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved first before blockchains can get to the point where they can create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. Because after all, these premises need to be true, otherwise there isn't a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS Block consists of 100 Tx Blocks. And as previously mentioned there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Whether you become a shard node or a DS node is determined by the result of a PoW cycle (Ethash) at the beginning of the DS Block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty will be allowed on the network. To put it in perspective: the average difficulty for one DS node is ~2 TH/s, equaling 2,000,000 MH/s or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 MH/s each. Each DS Block, 10 new DS nodes are allowed in. A shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of hashing power (Ethash) available you could mine solo.
 
The 60-second PoW cycle is a peak-performance burst and acts as an entry ticket to the network. The entry ticket is called a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. After every 100 Tx Blocks, which corresponds to roughly 1.5 hours, this PoW process repeats. In between these 1.5 hours no PoW needs to be done, meaning Zilliqa's energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
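To sanity-check the GPU arithmetic quoted above, here is a small sketch using those same figures (difficulty moves with the network, so treat the inputs as snapshots, not constants):

```python
# Sanity check of the PoW "entry ticket" figures quoted above.
gtx1070_mhs = 35.4              # Ethash hashrate of one GTX 1070, in MH/s

ds_node_hs = 2e12               # ~2 TH/s average difficulty for a DS node
shard_node_hs = 8.53e9          # ~8.53 GH/s for a shard node

ds_node_gpus = ds_node_hs / (gtx1070_mhs * 1e6)
shard_node_gpus = shard_node_hs / (gtx1070_mhs * 1e6)

print(f"DS node    ~= {ds_node_gpus:,.0f} GTX 1070s")    # ~56,497
print(f"shard node ~= {shard_node_gpus:,.0f} GTX 1070s")  # ~241
```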
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole we first must understand why Zilliqa goes through all of the above technicalities and understand a bit more what a blockchain on a more fundamental level is. Because the core of Zilliqa’s consensus protocol relies on the usage of pBFT (practical Byzantine Fault Tolerance) we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and just come back to this article. We will use this site to navigate through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing examples and definitions given by Samuel Brooks’ medium article, he describes the definition of a blockchain (like Zilliqa) as: “A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
 
Next, he states that: "blockchains are fundamentally systems for managing valid state transitions”. For some more context, I recommend reading the whole medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take traffic lights as an example: all its states (red, amber, and green) are predefined, all possible outcomes are known and it doesn’t matter if you encounter the traffic light today or tomorrow. It will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button resulting in one traffic lights’ state going from green to red (via amber) and another light from red to green.
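The traffic-light analogy maps directly onto a tiny state machine. A minimal, illustrative sketch (this is not Zilliqa code, just the concept of predefined states and transitions):

```python
# Toy state machine in the spirit of the traffic-light example:
# all states and transitions are predefined, so behaviour is deterministic.
TRANSITIONS = {
    ("red", "go"): "green",
    ("green", "stop"): "amber",
    ("amber", "stop"): "red",
}

def step(state: str, event: str) -> str:
    """Return the next state; stay put if the transition is not defined."""
    return TRANSITIONS.get((state, event), state)

state = "red"
for event in ["go", "stop", "stop"]:
    state = step(state, event)
    print(event, "->", state)   # go -> green, stop -> amber, stop -> red
```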
 
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1,5 years ago and every 45 seconds or so a new block linked to the previous block is being added. Resulting in a chain of blocks with transactions in it that everyone can verify from block #1 to the current #647.000+ block. The state is ever changing and the states it can find itself in are infinite. And while the traffic light might work together in tandem with various other traffic lights, it’s rather insignificant comparing it to a public blockchain. Because Zilliqa consists of 2400 nodes who need to work together to achieve consensus on what the latest valid state is while some of these nodes may have latency or broadcast issues, drop offline or are deliberately trying to attack the network, etc.
 
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block and DS height, and then hit refresh. Obviously, as expected, you see new incremented values for one or all parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communication between nodes, and as such there is no GPU involved (but CPU). Resulting in the total energy consumed to keep the blockchain secure, decentralized and scalable being low.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate ⅓ of the nodes being dishonest (offline counts as Byzantine = dishonest) and the consensus protocol will function without stalling or hiccups. Once there are more than ⅓ of dishonest nodes but no more than ⅔ the network will be stalled and a view change will be triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (66%) double-spend attacks become possible.
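Expressed as a quick calculation, the standard BFT thresholds look like this. This is a sketch only, applying the usual one-third / two-thirds bounds to the committee sizes mentioned in this post; Zilliqa's exact committee logic lives in its protocol implementation:

```python
# The usual BFT bounds applied to a committee of n nodes (illustration only).
def bft_limits(n: int):
    tolerated = (n - 1) // 3        # consensus keeps running while dishonest <= this
    stall_from = tolerated + 1      # between ~1/3 and ~2/3 dishonest: stall + view change
    unsafe_from = 2 * n // 3 + 1    # beyond ~2/3 dishonest: double spends become possible
    return tolerated, stall_from, unsafe_from

for n in (600, 2400):               # one shard / DS committee, and the whole network
    tolerated, stall_from, unsafe_from = bft_limits(n)
    print(f"n={n}: fine up to {tolerated}, stalls from {stall_from}, unsafe from {unsafe_from}")
```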
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus besides low energy is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain it’s done. Lastly, take a look at this article where three types of finality are being defined: probabilistic, absolute and economic finality. Zilliqa falls under the absolute finality (just like Tendermint for example). Although lengthy already we skipped through some of the inner workings from Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture on it. Enough about PoW, sybil resistance mechanism, pBFT, etc. Another thing we haven’t looked at yet is the amount of decentralization.
 
Decentralisation
 
Currently, there are four shards, each consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service - they need to achieve a higher difficulty than shard nodes) and 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes which perform pBFT. There is no data on where the PoW sources come from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2400 nodes, allowing more nodes and the formation of more shards, which will let the network keep on scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of nodes) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were first only operated by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes like exchanges had no incentive to open them for the greater public. They were centralised at first. Decentralisation at the seed nodes level has been steadily rolled out since March 2020 ( Zilliqa Improvement Proposal 3 ). Currently the amount of seed nodes is being increased, they are public-facing and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved with consensus! That is still PoW as entry ticket and pBFT for the actual consensus.
 
5% of the block rewards are being assigned to seed nodes (from the beginning in 2019) and those are being used to pay out ZIL stakers. The 5% block rewards with an annual yield of 10.03% translate to roughly 610 MM ZILs in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is being done by sending ZILs to a smart contract created by Zilliqa and audited by Quantstamp.
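Using the staking figures quoted above, here is a rough back-of-the-envelope (illustrative only; actual yields depend on how much ZIL is really staked and on protocol parameters):

```python
# Rough staking arithmetic from the figures in this section (illustrative).
stakeable_zil = 610_000_000     # ~610 MM ZIL that can be staked in total
annual_yield = 0.1003           # 10.03% per year, as quoted above

annual_staker_payout = stakeable_zil * annual_yield
print(f"~{annual_staker_payout / 1e6:.1f} M ZIL paid out to stakers per year")  # ~61.2 M

my_stake = 50_000               # hypothetical personal stake
print(f"a 50k ZIL stake earns ~{my_stake * annual_yield:,.0f} ZIL per year")    # ~5,015
```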
 
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
 
Smart contracts
 
Let me start by saying I’m not a developer and my programming skills are quite limited. So I‘m taking the ELI5 route (maybe 12) but if you are familiar with Javascript, Solidity or specifically OCaml please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa’s smart contract language Scilla works and if you ask yourself “why another programming language?” check this article. And if you want to play around with some sample contracts in an IDE click here. The faucet can be found here. And more information on architecture, dapp development and API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. Link is time-stamped so you’ll start right away with a platform introduction, roadmap 2020 and afterwards a proper Scilla introduction.
 
Generalized: programming languages can be divided into being ‘object-oriented’ or ‘functional’. Here is an ELI5 given by a software development academy: “all programs have two basic components, data – what the program knows – and behavior – what the program can do with that data. So object-oriented programming states that combining data and related behaviors in one place, is called “object”, which makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
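To make the object-oriented vs. functional distinction concrete, here is a tiny illustration of the same counter written both ways. This is plain Python for readability only; Scilla itself reads much closer to OCaml:

```python
# Object-oriented style: data and behaviour bundled together in one object.
class Counter:
    def __init__(self) -> None:
        self.value = 0              # mutable state lives inside the object

    def increment(self) -> None:
        self.value += 1             # behaviour mutates that state in place


# Functional style: data and behaviour kept separate, no mutation.
def increment(value: int) -> int:
    return value + 1                # returns a new value; the old one is untouched


c = Counter()
c.increment()
print(c.value)                      # 1

v = increment(0)
print(v)                            # 1
```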
 
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
 
Scilla is blockchain agnostic, can be implemented on other blockchains as well, is recognized by academics and won a Distinguished Artifact Award at the end of last year.
 
One of the reasons why the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic on a blockchain, programming, means that you cannot afford to make mistakes. Otherwise, it could cost you. It’s all great and fun blockchains being immutable but updating your code because you found a bug isn’t the same as with a regular web application for example. And with smart contracts, it inherently involves cryptocurrencies in some form thus value.
 
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
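A simple way to think about the fee is gas consumed times gas price. The sketch below is hedged: the 0.001 ZIL plain-transfer figure comes from the paragraph above, while the gas amount and price used for the contract call are hypothetical placeholders, not Zilliqa's actual parameters:

```python
# Fee model: total cost = gas used * gas price. The plain-transfer figure
# is from this post; the contract-call numbers are hypothetical placeholders.
def tx_fee(gas_used: int, gas_price_zil: float) -> float:
    return gas_used * gas_price_zil

plain_transfer_fee = 0.001                          # ZIL, as quoted above
contract_call_fee = tx_fee(gas_used=5_000,          # hypothetical gas consumption
                           gas_price_zil=0.000002)  # hypothetical price per gas unit

print(f"plain transfer : ~{plain_transfer_fee:.3f} ZIL")
print(f"contract call  : ~{contract_call_fee:.3f} ZIL")  # ~0.010 ZIL in this example
```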
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
 
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
 
Simply put, with Scilla and accompanying tooling developers can be mathematically sure and proof that the smart contract they’ve written does what he or she intends it to do.
 
Smart contract on a sharded environment and state sharding
 
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and what is the effect of state sharding). This is a complex topic. I’m not able to explain it any easier than what is posted here. But I will try to compress the post into something easy to digest.
 
Earlier on we established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can define simple transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable, with Category 2 transactions sometimes, if that address is in the same shard as the smart contract, but with Category 3 you definitely need communication between the shards. Solving that requires a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
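The three categories can be summarised in a few lines. This is a sketch of the classification only; the transaction fields used here are hypothetical and purely for illustration:

```python
# Classify a transaction by how many smart contracts it touches.
# The dict fields here are hypothetical, purely for illustration.
def tx_category(tx: dict) -> int:
    contracts = tx.get("contracts", [])
    if not contracts:            # Category 1: plain transfer from A to B
        return 1
    if len(contracts) == 1:      # Category 2: one smart contract involved
        return 2
    return 3                     # Category 3: several contracts, cross-shard coordination

print(tx_category({"from": "A", "to": "B", "contracts": []}))        # 1
print(tx_category({"from": "A", "contracts": ["dex"]}))              # 2
print(tx_category({"from": "A", "contracts": ["dex", "lending"]}))   # 3
```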
 
And this is where the downsides of state sharding come in currently. All shards in Zilliqa have access to the complete state. Yes, the state size (0.1 GB at the moment) grows and all of the nodes need to store it, but it also means that they don't need to shop around for information available on other shards, which would require more communication and add more complexity. Computer science and/or developer knowledge required if you want to dig further via these links: Scilla - language grammar; Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain; Gas Accounting; NUS x Zilliqa: Smart contract language workshop.
 
Easier-to-follow links on programming Scilla: https://learnscilla.com/home and Ivan on Tech.
 
Roadmap / Zilliqa 2.0
 
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships
 
It’s not only technology in which Zilliqa seems to be excelling, as their ecosystem has been expanding and starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PwC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought the initiatives live to the market. There is also an increasing list of organizations that are starting to provide digital payment services. Moreover, Singaporean blockchain developer Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. This all suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar wrote (one of the co-founders). These stablecoins will be integrated in the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies will for example start to use stablecoins for payments or remittances, instead of it solely being used for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating November 2019) which seems to be aligning well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa made by Switcheo which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating on it to be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on block explorer and XIDR (Indonesian Stablecoin) is also coming soon via StraitsX. Here also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
There are two basic building blocks in DeFi/OpFi though: 1) stablecoins as you need a non-volatile currency to get access to this market and 2) a dex to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
 
Additionally, they also have this ZILHive initiative that injects capital into projects. There have been already 6 waves of various teams working on infrastructure, innovation and research, and they are not from ASEAN or Singapore only but global: see Grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions onchain due to ability to scale and its resulting low fees which is why the UD team launched this on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, having several different fiat-gateways and recently have been added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”. They have a mix of tech people, business people, marketeers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you just follow their Twitter, their engagement is much higher than you would expect for a coin with approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data. According to their data, it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run. It was somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram also has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other official channels. Their local communities also seem to be growing.
 
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team ( see www.zillacracy.com ). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot and will also run their own non-custodial seed node for staking. This seed node will also allow them to start generating revenue for them to become a self sustaining entity that could potentially scale up to become a decentralized company working in parallel with the Zilliqa core team. Comparing it to all the other smart contract platforms (e.g. Cardano, EOS, Tezos etc.) they don't seem to have started a similar initiative (correct me if I’m wrong though). This suggests in my opinion that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’ and gives many projects in the space a disadvantage.
 
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
 
Regarding Zeeves, this is a tipping bot for Telegram. They already have thousands of signups and they plan to keep upgrading it so more and more people can use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It's a very smart approach to grow their communities and get people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven't covered everything (I'm also reaching the character limit haha). So many updates have been happening lately that it's hard to keep up, such as the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures and the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has been mentioning Zilliqa lately, acknowledging that both projects have a lot of room to grow. There is much more info of course and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
submitted by haveyouheardaboutit to CryptoCurrency

UYT Main-Net pre-launching AMA successfully completed with a blast

At 7 pm Beijing time on 29 September 2020, the UYT Main-Net pre-launch AMA was successfully completed with a blast!
Here is a full record of the AMA:
Host: Hello everyone, it’s a great honor to host the first AMA of UYT network in China. Today, we have invited the person in charge of UYT Dao.
Let's ask Mr. Woo to introduce himself. Woo: Hello, I'm Ben. I've met you in the previous global live broadcast. I'm the director of UYT Dao and the founder of IGNISVC. At present, I'm the CEO of the TKNT foundation and have been engaged in the blockchain industry.
Q1. At present, different types of blockchains have emerged, but cross-chain interaction is still suffering a lot. In your opinion, what is the necessity and significance of cross-chain?
Answer: The full name of UYT is to unite all your tokens, which is to integrate all public chains and increase the liquidity of the whole industry. Our purpose is not to create another public chain, but to become a platform for the exchange of value, technology, and resources of all public chains. What we need to solve is that each individual chain can circulate with each other.
Q2. The founder of Ethereum, Vitalik, once wrote a cross-chain operation report for the bank consortium chain R3, which mentioned three cross-chain methods. Which one does UYT belong to? Can you briefly introduce the cross-chain solution of UYT?
Answer: In Vitalik’s cross-chain report, there are three main cross-chain methods. The first is that both parties do not know that they are crossing the chain, or that they cannot “read” each other, such as the centralized exchange. The second way is that one of the links can read other chains, such as side-chain / relay chain. That is, a can read B, and B cannot read a; The third is that both a and B can read each other’s, which can achieve the value and information exchange between a, B, and the platform. UYT belongs to the third kind.
Our new official website will be online soon. Here are a few simple points: first of all, the architecture of UYT includes a relay chain, parachains, parathreads, and bridges. In terms of scalability, it exceeds almost all the public chains currently online.
In the UYT network, there are four kinds of consensus participants, namely collector, fisherman, nominator, and validator. The characteristics of this model are: first, anyone can participate without loss. Secondly, anyone who contributes more to the ecology will get more rewards; otherwise, they will receive a corresponding punishment.
The underlying layer of UYT is Substrate, which uses the Rust programming language. Rust is committed to being a programming language that can elegantly solve the problems of high-concurrency and high-security systems. This is also a great advantage that differentiates us from other blockchain projects technically.
Q3. What are the roles in the UYT network? What are their respective functions?
Answer: After the main network of UYT is online, there will be four roles: collector, fisherman, nominator, and validator, which is totally different from the current system of the test network.
The collector, in short, is responsible for collecting all kinds of information in the parallel chains and packaging that information for the validators.
The fisherman, to put it bluntly, does fishing law enforcement: it specifically checks for malicious acts and gets rewarded for catching them.
The nominator, in effect, represents a group of stakeholders. The validator is their representative, and they entrust their deposit to the validator.
The validator packages new blocks in the network. It must lock up a sufficient deposit and run a relay chain client on a highly available, high-bandwidth machine. It can be understood as a mining pool. It can also be understood as the node in the current UYT DAPP.
Q4. What is the mining mechanism of the UYT network?
The only way to obtain UYT after its issuance is to participate in mining activities. In the initial stage, the daily constant output of UYT is set to 1,440,000, with a Bitcoin-style halving cycle (a rough emission sketch follows the list below). Mining rewards can be obtained in the following five ways:
1) Asset pledge mapping mining
2) Becoming an intermediate chain node of the UYT network
3) Recommendation and reward mechanism
4) Voting reward
5) UYT network Dao will take out 10% of gas revenue from block packaging for community construction and rewarding excellent community members
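Taken at face value, the stated emission parameters imply a schedule like the one sketched below. Illustrative only: the AMA gives the 1,440,000 daily output and says the halving follows Bitcoin's cycle, so a four-year halving interval is assumed here:

```python
# Illustrative UYT emission schedule from the AMA figures:
# 1,440,000 UYT per day initially, halving "like Bitcoin".
# The halving interval is assumed to be ~4 years; the AMA does not state it.
DAILY_OUTPUT = 1_440_000
ASSUMED_HALVING_YEARS = 4

def yearly_emission(year: int) -> float:
    """UYT minted during a given year (year 0 = first year after launch)."""
    halvings = year // ASSUMED_HALVING_YEARS
    return DAILY_OUTPUT * 365 / (2 ** halvings)

for y in (0, 4, 8):
    print(f"year {y}: ~{yearly_emission(y) / 1e6:.0f} M UYT")
# year 0: ~526 M, year 4: ~263 M, year 8: ~131 M
```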
Q5. The rise and fall of the blockchain are very fast. In order to give investors confidence, is there a detailed development plan, implementation steps, and application direction of UYT network in the next few months?
Answer: UYT Network test network has been running stably for a year. After the main network is launched, all mechanisms will undergo major changes.
The relationship between the UYT test network and the main network can be understood as the relationship between KSM (DOT's test network) and the DOT main network. The feasibility of the technology can be demonstrated more quickly on the UYT test network because of its faster iteration, and all future technology updates will first run on the test network, moving to the main network after stable operation.
In order to give users a better experience and give more rewards to excellent nodes, all Dao organizers are working hard for it.
The development team has completed the cross-chain for Bitcoin and some high-quality Ethereum-based tokens in the early stage, and now the code has all been open-sourced. For other mainstream currencies, community members can apply for funds to develop them. In order to develop the ecology and build a better technical reserve, we will set up a special ecological development fund when the main network goes online. The transfer bridge is our key funding direction. The maximum application amount for a team is as high as 100,000 US dollars. In addition, if other public chains want to connect to UYT, they will get technical support. In order to encourage developers to participate in ecological construction, the Dao also launched a series of grants to support development. Developers can port the better applications from ETH and EOS directly, or develop new products according to their own advantages. These directions are now the focus of funding.
Because the UYT network started testing early, it is based on the earlier Substrate 1.0. The on-chain governance mode can only be realized after the upgrade to 2.0 is completed.
At present, the upgrade work is progressing steadily, and on-chain governance will be implemented with the launch of the UYT main network.
As a heterogeneous cross-chain solution with high scalability and extensibility, UYT network can in theory perfectly bridge parallel encryption systems and their encrypted assets, and its wide applicability in the future can be expected. Therefore, we do not limit the areas where UYT network will play its advantages and roles. But in the general direction, there will mainly be DEFI and DEX ecological plates. By industry, it can cover a wide range of fields, not only finance but also games, entertainment, shopping malls, real estate, and so on.
Q6. How can UYT help DEFI?
Answer: UYT network can not only link different public chains but also make parallel chains independent and interlinked. Just like the ACALA project some time ago, it has successfully obtained Pantera capital’s $7 million saft agreement. Although the concept of DEFI is very popular now, all DEFI products are still in the ecology of each public chain, and the cross-chain DEFI ecology has not been developed. UYT is to achieve cross-chain communication, value exchange, and develop truly decentralized financial services and products. For example, cross-chain decentralized flash cash, cross-chain asset support, cross-chain decentralized lending, Oracle machine, and other products. At present, our technical team is also speeding up the construction of infrastructure suitable for the landing of more DEFI products and services and is committed to creating a real cross-chain DEFI ecology, which is only a small step of UYT’s future plan.
Q7. TKNT should be one of the hottest projects in the UYT ecosystem recently. Please give us a brief introduction to the TKNT project and the value of TKNT in the UYT ecosystem. Why did TKNT increase 400 times in 7 days? And what is the cooperative relationship between UTC and TKNT?
Answer: I will answer for each project from the technical and resource aspects. Let's first introduce UTC. UTC is the token of the Copernican network and the first project of UYT's game and entertainment ecology. In the future it will be responsible for linking the high-quality public chains in the entertainment industry. Because UYT's slots are limited, each field will seek one high-quality partner and help that partner become a secondary relay chain of UYT. After the main network of UYT goes online, many chains will want to access UYT for greater value circulation. Since UYT's external slots are limited, the cost is also very high. At that point you can choose to connect to UTC first, and then connect UTC to UYT. As UTC builds more and more links with UYT, it will gradually evolve into a secondary relay chain of the UYT network. UTC's resources, online and offline, including offline payment and offline entity applications, also come with a very large community base.
The ecological partners have very good operating experience in the game industry. They will use blockchain technology to change the whole game and entertainment industry to make it more transparent and fair, and at the same time there are enough real consumption scenarios. This is also the reason why the UYT network chose to cooperate with it, and the UTC project has been supported by the UYT ecological fund; after the main network is launched, it will also be the first ecological cooperation project supported by UYT. Because of the timing of UYT's main network launch, UTC can't directly form its own chain at present and will give priority to issuing on Ethereum. TKNT is a new concept project from TKN.com. TKN is the largest online centralized guessing game platform in the world at present. TKNT mixes bet mining and DEFI, so it can carry out fixed mining through platform games, build a system that can realize game participation and in-application payment in all Dapps based on ERC20, and combine this with various financial services.
The reason why TKNT has created a myth of 400x in 7 days is that the TKN platform has a buyback plan. As we all know, online quiz game entertainment platforms have amazing profits. Every quarter, the profit is used for buybacks, and this strong profit support has led to the huge increase of the token. In the future, all users will be able to use UTC to participate in TKN games. Therefore, the launch of UYT's main network is also of great significance to TKNT. With the maturity of UYT's ecology and technology, TKNT can have an even more powerful performance. If TKNT wants to link more public chains, it needs to access the UYT network and realize a bigger vision with UYT's cross-chain interaction. After TKNT was listed on the exchange, the highest price rose to $14, and now it has dropped to about $2.50. You will see it once again set a record high and create greater miracles. You will also see that $3 will be the best buying point for TKNT, because there will be several major moves in TKNT: the global MLM plan will be launched on October 7 in Korea, China, and other countries; there will be many marketing teams in Europe promoting TKNT, including DAPP.com, which, as a shareholder of TKN, will also make every effort to promote TKNT. Secondly, TKNT will be launched next month on the largest digital currency exchange in South Korea, and Chinese users will see TKNT on Binance in November. Of course, the decentralized trading platform of UYT will also be launched in the future.
Q8. What is the significance of the launch of UYT’s main network for the industry and ecology?
Answer: UYT is one of the few cross-chain platform projects in the industry at present.
There are many public chains and coin issuing projects. Why? Because of less work, more money. However, there are very high technical and capital requirements for cross-chain and platform. This barrier is very high, so almost no project side is willing to do this. But once this is done, it will be of great significance to the whole industry of digital currency and blockchain.
Because it will subvert the current situation of the whole currency circle and chain circle, where everyone acts on their own and stakes out their own territory. It will let each independent ecosystem achieve a truly decentralized and trust-free cooperative relationship. This huge change will push the whole industry to develop into a healthy, virtuous-circle macro ecosystem.
Q9. The slogan of many project supporters is that UYT should surpass Ethereum. What is the difference in technology between UYT network and Ethereum?
Answer: Thank you so much for supporting UYT. In fact, the correct understanding is that UYT is the next era of Ethereum. First of all, UYT has a different vision from Ethereum.
Before the emergence of UYT, Ethereum, and EOS, no matter how well they developed, belonged to the era of a single chain. The popular metaphor is a LAN. However, UYT can realize the interoperability of each chain and bring the blockchain into the Internet era. Secondly, UYT is far superior to Ethereum in technology. It mainly includes three aspects: shared security, heterogeneous cross-chain, and no fork upgrade.
Since Ethereum 2.0 has not yet been implemented, UYT is the most friendly bottom layer for the DEFI projects and other Dapps on Ethereum. Now, UYT's chain-building framework Substrate is compatible with Ethereum's smart contract language Solidity, so ETH developers can easily migrate their smart contracts to UYT.
Up to now, there has been no good solution to the congestion problem of Ethereum, while the UYT network solves more than just network congestion. What's more, UYT can easily realize one-click online upgrades, instead of having to redeploy a set of contracts on Ethereum for each upgraded version and then requiring users to migrate the original assets from the old contract to the new contract. Developers can quickly and flexibly iterate their own protocols and adapt their application solutions to the situation, so as to serve more users and solve more problems. At the same time, they can also repair vulnerabilities in a contract very quickly. In the case of hacker attacks, they can also address stolen funds and a series of other problems through parallel chain management. We can see that for Ethereum, UYT not only solves the congestion problem we see in front of us but also provides the most important infrastructure for future applications such as DEFI on Ethereum to truly mature into open financial applications that can serve everyone. It also opens the Web 3.0 era of the blockchain industry. In terms of market value, Ethereum currently has a strong ecosystem, with a market value of US $40 billion. UYT will also focus on development in this area after the main network goes online. Whether in terms of market value or ecological construction, I have enough confidence in UYT; after all, we are fully prepared.
Q10. What is the progress of the ecological construction of UYT? What opportunities do current ecological partners see in UYT or what changes may be brought about by UYT ecology?
Answer: After the main network of UYT goes online, there will be a series of ecological construction actions, and more attention will be paid to establishing contact with traditional partners. Cross-chain decentralized flash cash, cross-chain asset support, cross-chain decentralized lending, Oracle machine, and other products will also be the key cooperation direction of UYT.
UYT will give priority to the game and entertainment industry because this industry is most easily disrupted by blockchain. As the ecological construction of UYT gets bigger and bigger, future slots will become more and more expensive. The earlier you join the UYT ecology, the more support you will get from the ecological fund, because the ecological fund is also limited. From the perspective of token appreciation, project parties that cooperate with UYT in the future will need to pledge a certain number of UYT to bid for slots; apart from ecological rewards, the rest has to be purchased on the open market.
The difference between this pledge and the pledge we usually talk about is that the UYT pledged by an ecological partner participating in the slot auction does not earn mining computing power.
The UYT main network has several opportunities for eco partners to look forward to. The first is Bitcoin: Bitcoin will come later than other assets, but eventually all the bubble and value will return to BTC, and after the wave of DeFi bubbles is washed out, the focus will very much be on Bitcoin. UYT ecology can provide a more mature bottom layer for DEFI. In addition, right now Ethereum's DEFI is limited to Ethereum and ERC-20 tokens, and the breakout point of Bitcoin has not yet arrived. Therefore, the DEFI of the UYT ecology may be the next opportunity, which is a good opportunity for everyone.
The second opportunity is that after the main network goes online, the future UYT ecological projects will compete to bid for slots. In fact, the original intention of UYT is to realize the interconnection of all chains. The chain outside the UYT ecology also needs to communicate. The third is cross-fi. The BIFI is hatched on Ethereum, and the def on UYT can realize multi-chain operation. For example, TkN games or future UTC game platform users can call bitcoin on the UYT chain. This form only belongs to the decentralized finance in the cross-chain era of UYT, which can be called cross-fi.
Q11. Which exchanges will UYT list on next? What is the listing strategy?
Answer: As the founder of ignisvc and the head of the UYT DAO organization, we have always had good cooperative relationships with major exchanges around the world. TKNT will appear on several exchanges one after another. The HitBTC exchange in the United Kingdom, the Upbit and Bithumb exchanges in South Korea, the Bitfinex exchange in the United States, and the Binance, BKEX, and KuCoin exchanges in China are all our partners, and they have been paying close attention to UYT's development. UYT is the public chain with the largest user base and the highest community participation in the cross-chain field, so its future value is immeasurable. If we do list on an exchange, we will choose one of those above. But UYT's vision is to create fairer, safer, and more transparent circulation in the digital currency field, with users keeping control of all their assets. Therefore, from the beginning the UYT wallet includes a simple DEX, which performs simple order matching with transactions settled on-chain. Once the UYT DEX is complete, more trading may take place there.
However, after the UYT mainnet is live, centralized exchanges can directly sync UYT block data, and it cannot be ruled out that some exchanges will list UYT trading on their own. Such exchanges will not enjoy support from the UYT ecosystem fund. UYT is a community-led project: each cooperation plan with an exchange will be decided in the way the community shares in the future, and the DAO organization can only implement the results of the vote.
Q12. What are the plans for promoting ecosystem development and marketing around the UYT mainnet launch?
Answer: The mainnet launch will be completed around October 15.
Offline, due to the epidemic, we will organize marketing activities jointly with nodes in different countries. At present, three large-scale offline meetups have been confirmed. We will also start a global roadshow when the epidemic is over.
Online, we have opened WeChat, Kakao, Twitter, Reddit, and Telegram communities. We will hold AMA events in various countries and promote UYT worldwide in a variety of ways. Of course, we will also launch MLM plans and cooperate with more marketing teams.
submitted by tkntfoundation to u/tkntfoundation


3. Best Bitcoin mining software: CGMiner. Pros: supports GPU/FPGA/ASIC mining; popular (frequently updated). Cons: text-only interface. Platforms: Windows, Mac, Linux. Going strong for many years, CGMiner is still one of the most popular GPU/FPGA/ASIC mining programs available. CGMiner is a command-line application written in C. It is also cross-platform, meaning you can use it with Windows ...
Fast Bitcoin miner for laptop. With one button you can start mining bitcoins! Easy bitcoin address setup. Every 4-5 days you can withdraw your mined bitcoins. No fees! Get massive hashing power for mining Bitcoin from your own PC with our unique algorithm. After approximately 4-5 days of mining you get 0.005 BTC. There is a round button with a green light and the text "TEST START". When it asks for a user name, type in "test".
Note that in 2017 the Bitcoin hashrate of a GPU doesn't matter, as all GPUs are too slow to put a dent in Bitcoin mining. – Dr.Haribo Jun 23 '17 at 8:16
Mining Speed Test. Test My Download Speed. Test My Upload Speed. Other speed tests, especially tests offered by your Internet provider, try to eliminate routing factors. This can make your connection appear faster than it really is. If you visit many websites in North America, Europe, Australia or Asia, the results returned here will be a more ...
When mining Bitcoin, you only need an internet connection for data syncing, which requires very little in terms of connection strength and bandwidth. There have been instances in which systems have mined Bitcoins successfully with as low as ~500 Kbps, which is nothing - dial-up speeds.
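As a rough sanity check on the two claims above (a single GPU being far too slow to matter, and ~500 Kbps being plenty of bandwidth for syncing), here is a back-of-the-envelope sketch in Python. The network hashrate, block reward, and block size figures are assumptions picked for illustration, not live measurements.

    # Back-of-the-envelope check of the two claims above (all figures are
    # illustrative assumptions, not live network data).
    GPU_HASHRATE = 1e9              # ~1 GH/s, an optimistic figure for one GPU on SHA-256
    NETWORK_HASHRATE = 100e18       # assume ~100 EH/s for the whole Bitcoin network
    BLOCKS_PER_DAY = 144            # one block roughly every 10 minutes
    BLOCK_REWARD = 6.25             # BTC per block after the 2020 halving

    # Expected share of blocks and expected daily reward for the lone GPU.
    share = GPU_HASHRATE / NETWORK_HASHRATE
    btc_per_day = share * BLOCKS_PER_DAY * BLOCK_REWARD
    years_per_block = 1 / (share * BLOCKS_PER_DAY) / 365

    print(f"Share of network hashrate: {share:.2e}")
    print(f"Expected BTC per day:      {btc_per_day:.2e}")
    print(f"Expected years per block:  {years_per_block:,.0f}")

    # Bandwidth: syncing ~1 MB of new block data every 10 minutes is far below 500 Kbps.
    block_bytes = 1_000_000
    needed_kbps = block_bytes * 8 / (10 * 60) / 1000
    print(f"Average sync bandwidth:    {needed_kbps:.1f} Kbps")   # well under 500 Kbps

Under these assumptions the GPU would take on the order of millions of years to find a block, while keeping up with new blocks needs only a dozen or so Kbps on average, which is consistent with the dial-up-speed claim in the text.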
