This module contains all the necessary functions for evaluating different video duplication detection techniques.

#### calc_tf_idf[source]

calc_tf_idf(tfs, dfs)
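No docstring is exported for `calc_tf_idf`, so as an illustration only, here is a minimal sketch of one common TF-IDF formulation. The helper name `calc_tf_idf_sketch` and the explicit `n_docs` argument are assumptions for the sketch, not the module's actual signature or weighting scheme.

```python
import math

def calc_tf_idf_sketch(tfs, dfs, n_docs):
    """A common TF-IDF weighting (a sketch, not this module's code).

    tfs: {term: frequency of the term in one document}
    dfs: {term: number of documents in the corpus containing the term}
    n_docs: total number of documents (assumed extra argument)
    """
    return {
        term: tf * math.log(n_docs / dfs[term])
        for term, tf in tfs.items()
        if dfs.get(term, 0) > 0  # skip terms absent from the corpus
    }
```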

#### cosine_similarity[source]

cosine_similarity(a, b)
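Cosine similarity is standard: the dot product of the two vectors divided by the product of their norms. A minimal pure-Python sketch (the `_sketch` name and the zero-vector convention are assumptions, not the module's implementation):

```python
import math

def cosine_similarity_sketch(a, b):
    # Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # convention for zero vectors; the module may differ
    return dot / (norm_a * norm_b)
```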

#### hit_rate_at_k[source]

hit_rate_at_k(rs, k)
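No docstring is exported for `hit_rate_at_k`. Judging by the name and by the binary-relevance convention used throughout this module, a plausible sketch is the fraction of rankings with at least one relevant item in the top k; this interpretation and the `_sketch` name are assumptions:

```python
def hit_rate_at_k_sketch(rs, k):
    # Fraction of rankings whose top-k results contain at least one
    # relevant item (nonzero entry). Assumed semantics, inferred from
    # the function name.
    rs = list(rs)
    return sum(1 for r in rs if any(r[:k])) / len(rs)
```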

## Following methods from: https://gist.github.com/bwhite/3726239

#### mean_reciprocal_rank[source]

mean_reciprocal_rank(rs)

Score is reciprocal of the rank of the first relevant item

First element is 'rank 1'. Relevance is binary (nonzero is relevant).

```python
>>> rs = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
>>> mean_reciprocal_rank(rs)
0.61111111111111105
>>> rs = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]])
>>> mean_reciprocal_rank(rs)
0.5
>>> rs = [[0, 0, 0, 1], [1, 0, 0], [1, 0, 0]]
>>> mean_reciprocal_rank(rs)
0.75
```

Args: rs: Iterator of relevance scores (list or numpy) in rank order (first element is the first item)

Returns: Mean reciprocal rank
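The reference implementation lives in the linked gist; a pure-Python sketch that reproduces the doctest values above (the `_sketch` name is mine, not the module's) could look like:

```python
def mean_reciprocal_rank_sketch(rs):
    # Reciprocal of the 1-based rank of the first nonzero entry,
    # 0.0 when a ranking has no relevant item; averaged over rankings.
    def reciprocal_rank(r):
        for i, rel in enumerate(r):
            if rel:
                return 1.0 / (i + 1)
        return 0.0

    rs = list(rs)
    return sum(reciprocal_rank(r) for r in rs) / len(rs)
```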

#### r_precision[source]

r_precision(r)

Score is precision after all relevant documents have been retrieved

Relevance is binary (nonzero is relevant).

```python
>>> r = [0, 0, 1]
>>> r_precision(r)
0.33333333333333331
>>> r = [0, 1, 0]
>>> r_precision(r)
0.5
>>> r = [1, 0, 0]
>>> r_precision(r)
1.0
```

Args: r: Relevance scores (list or numpy) in rank order (first element is the first item)

Returns: R Precision
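A pure-Python sketch matching the doctest values above (illustrative only; the gist holds the reference implementation):

```python
def r_precision_sketch(r):
    # Precision over the first R results, where R is the 1-based
    # position of the last relevant item, i.e. the point at which
    # all relevant documents have been retrieved.
    hits = [i for i, rel in enumerate(r) if rel]
    if not hits:
        return 0.0
    cutoff = hits[-1] + 1
    return sum(1 for rel in r[:cutoff] if rel) / cutoff
```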

#### precision_at_k[source]

precision_at_k(r, k)

Score is precision @ k

Relevance is binary (nonzero is relevant).

```python
>>> r = [0, 0, 1]
>>> precision_at_k(r, 1)
0.0
>>> precision_at_k(r, 2)
0.0
>>> precision_at_k(r, 3)
0.33333333333333331
>>> precision_at_k(r, 4)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: Relevance score length < k
```

Args: r: Relevance scores (list or numpy) in rank order (first element is the first item)

Returns: Precision @ k

Raises: ValueError: len(r) must be >= k
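A pure-Python sketch consistent with the doctests above, including the `ValueError` for short inputs (illustrative; the gist is the reference):

```python
def precision_at_k_sketch(r, k):
    # Fraction of the first k results that are relevant (nonzero).
    if len(r) < k:
        raise ValueError('Relevance score length < k')
    return sum(1 for rel in r[:k] if rel) / k
```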

#### average_precision[source]

average_precision(r)

Score is average precision (area under PR curve)

Relevance is binary (nonzero is relevant).

```python
>>> r = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
>>> delta_r = 1. / sum(r)
>>> sum([sum(r[:x + 1]) / (x + 1.) * delta_r for x, y in enumerate(r) if y])
0.7833333333333333
>>> average_precision(r)
0.78333333333333333
```

Args: r: Relevance scores (list or numpy) in rank order (first element is the first item)

Returns: Average precision
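A pure-Python sketch reproducing the doctest value above: average precision is the mean of precision@k taken at each rank where a relevant item appears (illustrative; the `_sketch` name is mine):

```python
def average_precision_sketch(r):
    # Mean of precision@k at every rank k where a relevant item
    # appears (binary relevance); 0.0 if nothing is relevant.
    precisions = [
        sum(1 for rel in r[:k + 1] if rel) / (k + 1)
        for k, rel in enumerate(r) if rel
    ]
    return sum(precisions) / len(precisions) if precisions else 0.0
```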

#### mean_average_precision[source]

mean_average_precision(rs)

Score is mean average precision

Relevance is binary (nonzero is relevant).

```python
>>> rs = [[1, 1, 0, 1, 0, 1, 0, 0, 0, 1]]
>>> mean_average_precision(rs)
0.78333333333333333
>>> rs = [[1, 1, 0, 1, 0, 1, 0, 0, 0, 1], [0]]
>>> mean_average_precision(rs)
0.39166666666666666
```

Args: rs: Iterator of relevance scores (list or numpy) in rank order (first element is the first item)

Returns: Mean average precision
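A self-contained pure-Python sketch matching the doctests above: compute average precision per ranking, then take the mean (illustrative only):

```python
def mean_average_precision_sketch(rs):
    # Average precision of one ranking: mean of precision@k at each
    # rank k where a relevant item appears.
    def average_precision(r):
        precisions = [
            sum(1 for rel in r[:k + 1] if rel) / (k + 1)
            for k, rel in enumerate(r) if rel
        ]
        return sum(precisions) / len(precisions) if precisions else 0.0

    # Mean of the per-ranking average precision scores.
    rs = list(rs)
    return sum(average_precision(r) for r in rs) / len(rs)
```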

#### recall_at_k[source]

recall_at_k(r, k, l)

Score is recall @ k

Relevance is binary (nonzero is relevant).

```python
>>> r = [0, 0, 1]
>>> recall_at_k(r, 1, 2)
0.0
>>> recall_at_k(r, 2, 2)
0.0
>>> recall_at_k(r, 3, 2)
0.5
>>> recall_at_k(r, 4, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: Relevance score length < k
```

Args: r: Relevance scores (list or numpy) in rank order (first element is the first item); k: number of top results to consider; l: the length or size of the relevant items (total number of relevant items)

Returns: Recall @ k

Raises: ValueError: len(r) must be >= k
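A pure-Python sketch consistent with the doctests above, where `l` is the total number of relevant items (e.g. `recall_at_k([0, 0, 1], 3, 2)` finds 1 of 2 relevant items, giving 0.5); illustrative only:

```python
def recall_at_k_sketch(r, k, l):
    # Fraction of the l relevant items found in the first k results.
    if len(r) < k:
        raise ValueError('Relevance score length < k')
    return sum(1 for rel in r[:k] if rel) / l
```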

```python
rs = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
mean_reciprocal_rank(rs)

r = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
average_precision(r)

mean_average_precision(rs)
```


#### rank_stats[source]

rank_stats(rs)

#### evaluate[source]

evaluate(rankings, top_k=[1, 5, 10])

#### get_eval_results[source]

get_eval_results(evals, app, item)

#### evaluate_ranking[source]

evaluate_ranking(ranking, ground_truth)
