
Photo by Carl Nenzen Loven on Unsplash

Introduction

If you have read my previous articles on matrix operations, you already know what a matrix is. A matrix is a 2D representation of an array with M rows and N columns. The shape of a matrix is generally referred to as its dimension; thus the shape of any typical matrix is represented as (M x N).

- Row Matrix - a collection of elements or objects stored in 1 row and N columns.
- Column Matrix - a collection of elements or objects stored in N rows and 1 column.

Note - Matrices of shapes (1 x N) and (N x 1) are generally called a row vector and a column vector respectively.
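Since every implementation below builds on plain Python lists, here is a small illustration (my own, not from the article) of how a (2 x 3) matrix looks as a list of lists:

# A (2 x 3) matrix: 2 rows, each holding 3 columns.
A = [[1, 2, 3],
     [4, 5, 6]]

print(len(A))     # number of rows -> 2
print(len(A[0]))  # number of columns -> 3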
Algorithm Explanation

For instance, let's assume we have two matrices A and B. The general rule before multiplying is that the number of COLUMNS of A should be exactly equal to the number of ROWS of B. We shall compute the dot product of each row of A with each column of B, and this process continues until there are no elements left to compute.

- Each row of A is considered to be a row vector.
- Each column of B is considered to be a column vector.

Dot Product - an algebraic operation that is computed on two equal-sized vectors and results in a single number. Mathematically, we represent it in the form of

a · b = a1*b1 + a2*b2 + ... + an*bn

For example, (1, 2, 3) · (4, 5, 6) = 1*4 + 2*5 + 3*6 = 32.

If we try to break down the whole algorithm, the very first thing we have to do is transpose one of the matrices and then compute the dot (scalar) product of each row of the first matrix with each column of the other.
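To make the shape rule concrete in code, here is a tiny helper of my own (not from the article) that checks whether two list-of-lists matrices can be multiplied:

def can_multiply(a, b):
    # a is (M x N), b is (P x Q); the product a x b exists only when N == P.
    return len(a[0]) == len(b)

print(can_multiply([[1, 2, 3], [4, 5, 6]], [[1, 2], [3, 4], [5, 6]]))  # True
print(can_multiply([[1, 2], [3, 4]], [[1, 2, 3]]))                     # False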
Dot Product

def scalar_product(r, c):
    # Element-wise products of the two vectors, then their sum.
    ps = [r_i * c_i for r_i, c_i in zip(r, c)]
    return sum(ps)

Matrix Transpose

def transpose(m):
    # Row i of the transpose gathers the i-th element of every row of m.
    trans_mat = [[row[i] for row in m] for i in range(len(m[0]))]
    return trans_mat

Matrix Multiplication

def mats_product(m1, m2):
    m2_t = transpose(m=m2)
    mats_p = [[scalar_product(r, c) for c in m2_t] for r in m1]
    return mats_p

To wrap all the functions together, we can do the following.

Wrap Up

def easy_product(m1, m2):
    def transpose(m):
        trans_mat = [[row[i] for row in m] for i in range(len(m[0]))]
        return trans_mat

    def scalar_product(r, c):
        ps = [r_i * c_i for r_i, c_i in zip(r, c)]
        return sum(ps)

    def mats_product(m1, m2):
        m2_t = transpose(m=m2)
        mats_p = [[scalar_product(r, c) for c in m2_t] for r in m1]
        return mats_p

    return mats_product(m1, m2)
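As a quick sanity check (my own addition, not from the article), easy_product can be run on a small pair of matrices whose product is easy to verify by hand:

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]

# Verified by hand: [[1*7 + 2*9 + 3*11, 1*8 + 2*10 + 3*12], ...]
print(easy_product(A, B))  # [[58, 64], [139, 154]]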

More simplified code - (Optimized)

The above easy_product() can still be optimized by using the built-in methods of Python. The method transpose() in particular can be improved.

Matrix Transpose

transpose = lambda m: list(map(list, zip(*m)))

The above scalar_product() can still be reduced and maintained like -

scalar_product = lambda r, c: sum([x * y for x, y in zip(r, c)])

Wrap Up

def optimized_product(m1, m2):
    transpose = lambda m: list(map(list, zip(*m)))
    scalar_product = lambda r, c: sum([x * y for x, y in zip(r, c)])

    def mats_product(m1, m2):
        m2_t = transpose(m=m2)
        mats_p = [[scalar_product(r, c) for c in m2_t] for r in m1]
        return mats_p

    return mats_product(m1, m2)

Awesome! Both the functions are ready to be tested. In order to test them, we need to have matrices defined. We will create a function that generates random matrices, which will also be helpful for checking the speed of both implementations.

Random matrices creation

import random

def create_matrix(rcount, ccount):
    # Fixed seed so the generated matrices are reproducible.
    random.seed(10)
    # The value range (0-9) is an assumption; the original range was lost in extraction.
    m = [[random.randint(0, 9) for i in range(ccount)] for j in range(rcount)]
    return m

Testing Phase

> nr = 2
> nc = 3
> m1 = create_matrix(nr, nc)
> m2 = create_matrix(nc, nr)

Normal Function

> mm = easy_product(m1, m2)
> print(mm)

Improved Code

> mm = optimized_product(m1, m2)
> print(mm)
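One extra check (my own addition, not from the article): both implementations should return identical results for the same inputs, which we can assert over a few shapes.

# easy_product and optimized_product should agree on every input.
for nr, nc in [(2, 3), (3, 3), (4, 2)]:
    m1 = create_matrix(nr, nc)
    m2 = create_matrix(nc, nr)
    assert easy_product(m1, m2) == optimized_product(m1, m2)
print("Both implementations agree.")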

Speed and Computation - Comparison

Now, both the functions seem to work well enough. But it is also important to check each algorithm's performance in terms of computation speed. For this, we will run both the functions in a loop over a defined set of matrix shapes and store the amount of time each one takes. We shall also plot the results to represent the comparison visually.
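The timing and plotting code did not survive extraction here, so the following is a minimal sketch of how the comparison could be written, assuming square matrices of growing size, time.perf_counter for timing, and matplotlib for the plot (the shapes and styling are my own choices, not the article's):

import time
import matplotlib.pyplot as plt

sizes = [10, 20, 40, 80]  # matrix shapes to benchmark (assumed)
easy_times, optimized_times = [], []

for n in sizes:
    m1 = create_matrix(n, n)
    m2 = create_matrix(n, n)

    start = time.perf_counter()
    easy_product(m1, m2)
    easy_times.append(time.perf_counter() - start)

    start = time.perf_counter()
    optimized_product(m1, m2)
    optimized_times.append(time.perf_counter() - start)

plt.plot(sizes, easy_times, label="easy_product")
plt.plot(sizes, optimized_times, label="optimized_product")
plt.xlabel("Matrix size (N x N)")
plt.ylabel("Time (seconds)")
plt.legend()
plt.show()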
