Big-O notation - how does it work?
I'm new to time complexity using Big-O notation. I have 3 examples and am trying to figure out the Big-O of each.
sum = 0;
for (i = 0; i < m/3; i++) {
    System.out.println("*");
    for (j = 0; j < n; j++)
        for (k = 0; k < 10; k = k + 2)
            sum++;
}
I think example 1 is O(mn): the first loop runs m/3 times, the second loop runs n times, and the third loop runs a constant 5 times (k = 0, 2, 4, 6, 8), so overall O(mn).
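One way to check this reasoning is to run the loops and count the `sum++` operations directly. The class below is a sketch of my own (`Example1Count` is not from the original post); it should produce exactly (m/3) * n * 5 operations:

```java
// Hypothetical helper: count how many times sum++ runs in example 1.
public class Example1Count {
    static long countOps(int m, int n) {
        long sum = 0;
        for (int i = 0; i < m / 3; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < 10; k = k + 2) { // 5 iterations: k = 0, 2, 4, 6, 8
                    sum++;
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int m = 30, n = 7;
        // (m/3) * n * 5 = 10 * 7 * 5 = 350
        System.out.println(countOps(m, n)); // prints 350
    }
}
```

The constant factors 1/3 and 5 change the count but not the growth rate, which is the point of writing it as O(mn).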
sum = 100;
for (i = 0; i < n; i++) {
    sum++;
}
for (j = 10; j < n; j++)
    for (k = 1; k < m; k *= 2)
        sum++;
I think Big-O = O(log(m)), with the number of operations executed being n + ((n - 10) * log(m)).
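The operation count n + (n - 10) * log(m) can be sanity-checked empirically with a counter of my own devising (the class name `Example2Count` is an assumption, not from the post); the doubling loop `k *= 2` runs log2(m) times when m is a power of two:

```java
// Hypothetical helper: count how many times sum++ runs in example 2.
public class Example2Count {
    static long countOps(int n, int m) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            ops++; // first loop: n operations
        }
        for (int j = 10; j < n; j++)
            for (int k = 1; k < m; k *= 2) // k = 1, 2, 4, ... : about log2(m) steps
                ops++;
        return ops;
    }

    public static void main(String[] args) {
        int n = 20, m = 16; // log2(16) = 4 doublings: k = 1, 2, 4, 8
        // n + (n - 10) * log2(m) = 20 + 10 * 4 = 60
        System.out.println(countOps(n, m)); // prints 60
    }
}
```

Since n dominates the constant 10 for large inputs, the count grows like n * log(m), not log(m) alone.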
sum = 0;
for (i = 2; i < n; i++) {
    for (j = 3; j < n; j += 2) {
        sum++;
    }
}
for (k = 1; k < m; k *= 2)
    sum++;
Here I think Big-O = log(m) * n^2. Is that correct?
Here it is:
- O(m/3 * n * 5) = O(mn * c) = O(mn)
- O(n + (n - 10) * log(m)) = O(n * log(m))
- O((n - 2) * (n - 3)/2 + log(m)) = O(n^2 + log(m)) = O(n^2)
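The third count can be verified the same way as the others; this sketch (the class name `Example3Count` is mine) runs example 3's loops for concrete n and m and counts operations:

```java
// Hypothetical helper: count how many times sum++ runs in example 3.
public class Example3Count {
    static long countOps(int n, int m) {
        long ops = 0;
        for (int i = 2; i < n; i++)      // n - 2 iterations
            for (int j = 3; j < n; j += 2) // about (n - 3)/2 iterations
                ops++;
        for (int k = 1; k < m; k *= 2)   // about log2(m) iterations
            ops++;
        return ops;
    }

    public static void main(String[] args) {
        int n = 11, m = 16;
        // outer: i = 2..10 -> 9 iterations; inner: j = 3, 5, 7, 9 -> 4 iterations
        // nested part: 9 * 4 = 36; k loop: k = 1, 2, 4, 8 -> 4; total 40
        System.out.println(countOps(n, m)); // prints 40
    }
}
```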
Next time, please format your code blocks with braces so they're clearer.
In 3, O(n^2 + log(m)) → O(n^2) because f(x) = x^2 grows faster than g(x) = log(x) as x → +∞.
UPD. The code, formatted a bit more nicely:
sum = 0;
// Inner loop: (n - 3)/2 actions, or more simply n/2, or just n
for (i = 2; i < n; i++) {
    for (j = 3; j < n; j += 2) {
        sum++;
    }
}
because in Big-O constants don't matter, i.e. O(5) = O(1) and O(18 * x^5) = O(x^5). For example, that's why the log in Big-O doesn't have a base: O(log2(x)) = O(log(x) / log(2)) = O(log(x) * const) = O(log(x)), where log is the natural logarithm (base e).
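The base-change identity is easy to demonstrate numerically. The sketch below (a made-up `LogBase` class, not from the post) shows that log2(x) / ln(x) is the same constant 1/ln(2) for every x, which is exactly why the base disappears inside O(log x):

```java
// Hypothetical demo: the base of a logarithm is only a constant factor.
public class LogBase {
    // Change of base: log2(x) = ln(x) / ln(2)
    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        for (double x : new double[] {10, 1000, 1e6}) {
            // The ratio is always 1 / ln(2) ≈ 1.4427, independent of x
            System.out.println(log2(x) / Math.log(x));
        }
    }
}
```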
Let's go through it again:
sum = 0;
// n actions in the inner loop, repeated n times: O(n^2)
for (i = 2; i < n; i++) {
    for (j = 3; j < n; j += 2) {
        sum++;
    }
}
// There are log(m) actions here
for (k = 1; k < m; k *= 2) {
    sum++;
}
So, summing it up: O(n^2 + log(m)). Now take a look at the functions x^2 and log(x): x^2 grows faster than log(x). A proof can be obtained by examining the limit of l(x) = log(x) / x^2 as x → +∞; it equals zero. That's why in the sum x^2 + log(x) the first term dominates: [x^2 + log(x)] / x^2 = 1 + o(1), i.e. the sum and x^2 alone are equal in terms of complexity. That's why O(n^2 + log(m)) = O(n^2).
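The limit argument can be seen numerically as well. This small sketch (class and method names are my own) evaluates l(x) = log(x) / x^2 at growing x and shows the values shrinking toward zero:

```java
// Hypothetical demo: log(x) / x^2 tends to 0, so x^2 dominates log(x).
public class GrowthCompare {
    static double l(double x) {
        return Math.log(x) / (x * x);
    }

    public static void main(String[] args) {
        for (double x : new double[] {10, 100, 1000, 10000}) {
            // Each value is smaller than the last, heading toward 0
            System.out.println(l(x));
        }
    }
}
```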
The original expression has two different variables, so it's better to leave it as O(n^2 + log(m)), because that makes it clearer how the complexity depends on each variable.