in Machine Learning by (15.7k points)
I know I am not the only one who dislikes progress bars and time estimates that are wildly unrealistic. The best examples are installers that jump from 0% to 90% in 10 seconds and then take an hour to complete the final 10%.

Most of the time, programmers just estimate the number of steps needed to complete a task and then display currentstep/totalsteps as a percentage, ignoring the fact that each step might take a different amount of time to complete. For example, if you insert rows into a database, the insertion time can increase with the number of rows already inserted (an easy example), and the time to copy files depends not only on the size of each file but also on its location on the disk and how fragmented it is.
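To make the point concrete, here is a minimal sketch (the function name and the weights are purely illustrative) of a progress calculation that weights steps by their expected cost instead of counting them equally:

```python
def weighted_progress(completed_steps, step_weights):
    """Return fractional progress, weighting each step by its expected cost."""
    total = sum(step_weights)
    done = sum(step_weights[:completed_steps])
    return done / total

# Copying a 900 MB file dominates three quick config writes, so finishing
# the copy should report ~99.7% progress, not the naive 1/4 = 25%.
weights = [900, 1, 1, 1]  # relative expected cost of each step (a guess)
print(weighted_progress(1, weights))
```

Of course this only shifts the problem to estimating the weights, which is exactly where the per-step timing differences described above come in.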

Today I asked myself whether anybody has already tried to model this, and perhaps created a library with a configurable, robust estimator. I know that it is difficult to give robust estimates, because external factors (the network connection, the user running other programs, etc.) play their part.

Maybe there is also a solution that uses profiling to set up a better estimator, or one could use machine learning approaches.
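One common building block for such an estimator, sketched here under my own assumed class and parameter names rather than taken from any particular library, is to smooth the measured throughput with an exponentially weighted moving average and derive the remaining time from it:

```python
import time

class EwmaEta:
    """Remaining-time estimator that smooths measured throughput with an
    exponentially weighted moving average. `alpha` is an assumed tuning
    knob: higher values react faster to speed changes but are noisier."""

    def __init__(self, total_units, alpha=0.3, now=None):
        self.total = total_units
        self.done = 0
        self.alpha = alpha
        self.rate = None  # smoothed throughput in units per second
        self.last_time = time.monotonic() if now is None else now

    def update(self, units_done, now=None):
        """Record total units completed so far; `now` is injectable for testing."""
        now = time.monotonic() if now is None else now
        dt = now - self.last_time
        if dt > 0 and units_done > self.done:
            inst = (units_done - self.done) / dt
            self.rate = (inst if self.rate is None
                         else self.alpha * inst + (1 - self.alpha) * self.rate)
        self.done = units_done
        self.last_time = now

    def eta_seconds(self):
        """Estimated seconds remaining, or None before any measurement."""
        if not self.rate:
            return None
        return (self.total - self.done) / self.rate
```

This will not defeat truly unpredictable external factors, but it degrades gracefully: the estimate keeps adapting instead of sticking to an initial guess.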

Does anybody know of advanced solutions for this problem?

1 Answer

by (33.2k points)
There is hardly any library that handles this kind of estimation, but I can suggest a different approach. I once implemented a progress bar to report the progress of a long, complicated file operation. The software kept track of the time it took for reads, writes, and processing, and adjusted the progress bar accordingly. After it had gathered a few measurements, the progress bar moved smoothly.

This works as long as the times taken by your operations are easily measured. I would be wary of using this method on something like a download progress indicator, since the speed of the network is completely indeterminate.
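A minimal sketch of that idea follows; the phase names and the simple running-average scheme are illustrative assumptions, not the exact implementation described above:

```python
class PhaseProgress:
    """Progress tracker that measures how long each phase (e.g. read,
    process, write) actually takes on early items, and weights overall
    progress by those measured averages."""

    def __init__(self, phases, items):
        self.items = items
        self.avg = {p: 1.0 for p in phases}      # start equal, refine later
        self.samples = {p: [] for p in phases}
        self.completed = 0.0                      # measured seconds so far

    def record(self, phase, seconds):
        """Report a measured phase duration (e.g. timed with time.monotonic())."""
        self.samples[phase].append(seconds)
        self.avg[phase] = sum(self.samples[phase]) / len(self.samples[phase])
        self.completed += seconds

    def fraction(self):
        per_item = sum(self.avg.values())         # current estimate per item
        total = per_item * self.items
        return min(1.0, self.completed / total) if total else 0.0
```

Because the per-phase averages are refreshed as measurements arrive, a slow disk or an unexpectedly expensive processing phase pulls the estimate toward reality instead of letting the bar stall at 90%.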