- 144
- 9,305,647
WilliamFiset
United States
Joined Apr 23, 2012
Educational computer science and mathematics channel for all to enjoy and learn from.
Understanding Mergesort: Sorting Made Simple | Recursion Series
Mergesort is a well-known and efficient sorting algorithm that follows the divide-and-conquer paradigm. It aims to sort a given array or list of elements by dividing it into smaller subarrays, sorting those subarrays recursively, and then merging them back together to obtain the final sorted result.
The algorithm begins by repeatedly dividing the input array into two halves until each subarray contains only one element or is empty. This process is known as the "divide" step. Then, it merges adjacent pairs of these subarrays, comparing and rearranging the elements in a sorted order. This merging process continues until a single sorted array is obtained, hence the term "merge" in mergesort.
One of the key advantages of mergesort is its stability, meaning that elements with equal values retain their original relative order after sorting. This property makes mergesort particularly useful in scenarios where preserving the original order of equal elements is crucial.
Mergesort exhibits a time complexity of O(n log n), where "n" represents the number of elements in the input array. This efficiency makes mergesort suitable for sorting large datasets, and it is often used as a standard benchmark for other sorting algorithms.
Overall, mergesort's simplicity, stability, and efficient performance have made it a popular choice in various applications, ranging from general-purpose sorting to applications in data analysis, parallel computing, and more.
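The divide and merge steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the video's source code (which lives in the linked Java repository); note how using `<=` in the merge keeps equal elements in their original order, which is exactly the stability property mentioned.

```python
def mergesort(arr):
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the array into two halves and sort each recursively.
    mid = len(arr) // 2
    left = mergesort(arr[:mid])
    right = mergesort(arr[mid:])
    # Merge: combine the two sorted halves into one sorted array.
    return merge(left, right)

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        # '<=' takes from the left half on ties, preserving the original
        # relative order of equal elements (stability).
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One side is exhausted; the rest of the other side is already sorted.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```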
0:00 Mergesort overview
2:15 Algorithm animation
3:35 The divide phase
5:10 The conquer phase
5:53 Mergesort pseudocode
7:46 Merge function
Source code repository:
github.com/williamfiset/algorithms
Video slides:
github.com/williamfiset/algorithms/tree/master/slides
Views: 19,997
Videos
Divide and Conquer: The Art of Breaking Down Problems | Recursion Series
30K views · 1 year ago
Divide and Conquer is a powerful algorithmic paradigm that breaks down complex problems into smaller, more manageable subproblems. By conquering each subproblem individually and then merging the solutions, this technique allows us to solve intricate challenges efficiently. It is widely used in various domains, such as computer graphics, data analysis, and computational biology. Source code repo...
Loops and wrapper functions | Recursion Series
4.5K views · 1 year ago
This video explores how to loop through a list using recursion. Source code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfiset/algorithms/tree/master/slides
Multiple Return Values: Unlocking the Power of Recursive Functions | Recursion Series
4.7K views · 1 year ago
In this video, we dive into the fascinating world of recursive functions and learn how to return multiple values from them. Recursive functions are powerful tools that call themselves repeatedly to solve complex problems by breaking them down into smaller, more manageable subproblems. Returning multiple values from a recursive function can be particularly useful when you need to gather and anal...
Strings & Palindromes | Recursion Series
4.8K views · 1 year ago
In this video we explore how to reverse a string using recursion and check whether or not a string is a palindrome using the 'outside-in' method. Source code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfiset/algorithms/tree/master/slides
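The two techniques this description names can be sketched briefly in Python (a hedged illustration only; the video's own code is in the linked Java repo). The palindrome check is the 'outside-in' method: compare the outermost pair of characters, then recurse on the middle.

```python
def reverse(s):
    # Base case: empty or single-character strings reverse to themselves.
    if len(s) <= 1:
        return s
    # Move the first character to the end and reverse the rest.
    return reverse(s[1:]) + s[0]

def is_palindrome(s):
    # 'Outside-in': an empty/one-char string is trivially a palindrome.
    if len(s) <= 1:
        return True
    # If the outermost characters disagree, it cannot be a palindrome.
    if s[0] != s[-1]:
        return False
    # Otherwise recurse on the string with both ends stripped off.
    return is_palindrome(s[1:-1])
```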
Recursive multiplication | Recursion series
6K views · 1 year ago
Exploring multiplication tackles implementing multiplication recursively without the use of the multiplication operator or loops. Video slides: github.com/williamfiset/algorithms/tree/master/slides 0:00 Intro 0:41 The multiplication problem 1:13 Understanding multiplication 3:19 Multiplication pseudocode 5:58 Challenge 6:46 Multiplication full solution
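The idea described above — multiplication without the `*` operator or loops — reduces to repeated addition via recursion. A minimal sketch (assuming a non-negative second operand, which the full video solution may generalize):

```python
def multiply(a, b):
    # a * b = a + a * (b - 1), computed recursively with no '*' or loops.
    # Assumes b is a non-negative integer.
    if b == 0:
        return 0  # base case: anything times zero is zero
    return a + multiply(a, b - 1)
```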
Introduction to recursion | Recursion series
12K views · 1 year ago
An introduction to recursion and the components that make up a recursive function including the base case, the recursive call (transition), and the body. Source code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfiset/algorithms/tree/master/slides 0:00 Introduction to recursive functions 1:24 Components of recursive functions 1:57 The base case 2:49 The recursiv...
Tiling problems [2/2] | Dynamic Programming
13K views · 2 years ago
A generalization of how to solve tiling problems using dynamic programming Previous video: ua-cam.com/video/gQszF5qdZ-0/v-deo.html Tiling problems: projecteuler.net/problem=114 projecteuler.net/problem=115 projecteuler.net/problem=116 projecteuler.net/problem=117 Algorithms code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfiset/algorithms/tree/master/slides My...
Tiling problems [1/2] | Dynamic Programming
38K views · 3 years ago
An introduction on how to solve tiling problems using dynamic programming Next video: ua-cam.com/video/L1x3an2pl3U/v-deo.html Project Euler practice problems: projecteuler.net/problem=114 projecteuler.net/problem=115 projecteuler.net/problem=116 projecteuler.net/problem=117 Algorithms code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfiset/algorithms/tree/maste...
Narrow Art Gallery | Dynamic Programming
9K views · 3 years ago
Walkthrough of the Narrow Art Gallery problem that featured in the 2014 ICPC North America qualifier. Narrow Art Gallery problem: open.kattis.com/problems/narrowartgallery Source Code: github.com/williamfiset/Algorithms/tree/master/src/main/java/com/williamfiset/algorithms/dp/examples/narrowartgallery Algorithms code repository: github.com/williamfiset/algorithms Video slides: github.com/willia...
Topological Sort | Kahn's Algorithm | Graph Theory
115K views · 3 years ago
Source code repository: github.com/williamfiset/algorithms#graph-theory Video slides: github.com/williamfiset/algorithms/tree/master/slides Website: www.williamfiset.com Audio intro/outro composed by Richard Saney (rnsaney@gmail.com) 0:00 Intro 0:22 Topological sort example 2:09 Topological sort motivation 2:37 Topological ordering 3:36 Directed acyclic graphs 4:31 A case against cycles 5:36 Ka...
Graph Theory Algorithms
225K views · 4 years ago
Graph Theory algorithms video series Support me by purchasing the full graph theory playlist on Udemy. This version offers additional problems, exercises and quizzes not available on UA-cam: www.udemy.com/course/graph-theory-algorithms Graph Theory video series playlist on UA-cam: ua-cam.com/play/PLDV1Zeh2NRsDGO4 qE8yH72HFL1Km93P.html Topics covered in these videos include: how to store and rep...
Mountain Scenes | Dynamic Programming
10K views · 4 years ago
Walkthrough of the Mountain Scenes dynamic programming problem that featured in the 2016 NAIPC programming competition. Mountain Scenes problem: open.kattis.com/problems/scenes Source Code: github.com/williamfiset/Algorithms/tree/master/src/main/java/com/williamfiset/algorithms/dp/examples/scenes Algorithms code repository: github.com/williamfiset/algorithms Video slides: github.com/williamfise...
Tiling Dominoes and Trominoes (Leetcode 790) | Dynamic Programming
29K views · 4 years ago
Explanation video on how to tile a 2xN grid with dominoes and L shaped trominoes. Leetcode problem: leetcode.com/problems/domino-and-tromino-tiling/ Source code: github.com/williamfiset/Algorithms/tree/master/src/main/java/com/williamfiset/algorithms/dp/examples/domino-and-tromino-tiling Algorithms repo: github.com/williamfiset/algorithms Previous video on tiling dominoes: ua-cam.com/video/yn2j...
Tiling dominoes | Dynamic programming
40K views · 4 years ago
Walkthrough of dynamic programming on how to tile dominoes on a grid. Problem: open.kattis.com/problems/tritiling Source code: github.com/williamfiset/Algorithms/tree/master/src/main/java/com/williamfiset/algorithms/dp/examples/tilingdominoes Video slides: github.com/williamfiset/algorithms/tree/master/slides Website: www.williamfiset.com
Magic Cows | Dynamic Programming | Adhoc | Interview problem
42K views · 4 years ago
Tarjan's Strongly Connected Component (SCC) Algorithm (UPDATED) | Graph Theory
141K views · 4 years ago
Indexed Priority Queue (UPDATED) | Data Structures
25K views · 4 years ago
Lowest Common Ancestor (LCA) Problem | Source Code
11K views · 4 years ago
Lowest Common Ancestor (LCA) Problem | Eulerian path method
42K views · 4 years ago
Sparse Table Data Structure | Source Code
6K views · 4 years ago
Identifying Isomorphic Trees | Source Code | Graph Theory
13K views · 4 years ago
Identifying Isomorphic Trees | Graph Theory
29K views · 4 years ago
Introduction to tree algorithms | Graph Theory
82K views · 4 years ago
Beginner tree algorithms | Graph Theory
36K views · 4 years ago
Eager Prim's Minimum Spanning Tree Algorithm | Source Code
5K views · 5 years ago
Eager Prim's Minimum Spanning Tree Algorithm | Graph Theory
24K views · 5 years ago
seriously, these videos are super good
thank you!!!! you are the best!
As we do more and more union-find operations, the path gets even more compressed, making it efficient. Such a wonderful concept.
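The path compression the comment above describes can be sketched in a few lines of Python (a minimal illustration, not the channel's Java implementation): after a `find`, every node on the searched path is re-pointed directly at the root, so later finds on those nodes are nearly O(1).

```python
parent = list(range(10))  # each of 10 nodes starts as its own root

def find(x):
    # First pass: walk up the parent pointers to locate the root.
    root = x
    while parent[root] != root:
        root = parent[root]
    # Second pass (path compression): point every node on the path
    # we just walked directly at the root.
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root

def union(a, b):
    # Merge the two components by re-rooting one under the other.
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
```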
We use this concept in the implementation of segment trees.
this concept is used in segment trees
Note that a singly linked list can also have head and tail pointers, but each node has only one pointer, to the next node. When removing at the tail of a singly linked list, we can reach the last node via the tail pointer, but the tail node doesn't know its previous node; it only knows its next is NULL. So after removing the tail, we have to traverse from the head to restore a valid tail pointer.
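The point made above is why tail removal in a singly linked list is O(n). A minimal sketch (hypothetical `Node`/`remove_tail` names, not the video's code): the only way to find the node before the tail is to walk from the head.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # a singly linked node has no 'prev' pointer

def remove_tail(head):
    # Removing the tail requires finding the node *before* it, and the
    # tail itself can't tell us that -- so we walk from the head: O(n).
    if head is None or head.next is None:
        return None  # zero- or one-node list becomes empty
    node = head
    while node.next.next is not None:
        node = node.next
    node.next = None  # 'node' is now the new tail
    return head
```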
great explanation
Extremely good job of explaining! Some people are gifted at teaching! (I know it's not a "gift" — it's mainly the result of hard work, looking at problems a zillion times from different perspectives and finding a suitable one to show the newcomer.)
owo
4:50 how is the predecessor 7? It should be 8.
Oh how I hate all of you, PhD supremacists, with all your self-absorbed big big words, super-scientific encryption from planet Jupiter... "This blue arrow incorporates that triangulation within dissonance of Pythagorean uncertainty of current white arrow that is equilibrium of the cosmic entropy..." Then they ask why beginners hate any type of science and give up on everything... dude, 2 + 2 = 4, that's it! All this could be simply said as: CURRENT capacity value = [previous capacity (one row above, same column) - current given capacity (weight)], add the number (value it holds) to the current number (value), and that's the value that gets to the current cell... But nooooo, it has to go through rigorous BS.
Thank you so much, it is extremely helpful. One image is worth a thousand words.
I was having a really hard time with DP, but it's so much easier to imagine the solution to every DP problem as the shortest-paths tree of a directed acyclic graph. So in this case it would follow the topological sort and ONLY update a cell (relax it) if the DP's condition is met. For actual graphs this is exactly what we do to find the shortest-paths tree of a DAG.
Why have you used Object instead of the generic type T as the argument in the remove(), indexOf(), and contains() methods?
Brilliant video, thank you so much for these. I think you could also modify solve() to return a prev collection as in the BFS Shortest Path video, and add the end cell coordinates or marker character ('E') as a parameter to the enclosing function. Then you no longer need all the global counting variables (nodes_left_in_layer etc). You would need to perform [number of steps in shortest path] iterations over prev to get the shortest path and return shortest_path.size() - 1. Maybe not optimal since you have to iterate over the shortest path too, but as long as you still break inside solve() when reached_end = True, the time complexity should stay the same.
This channel is amazing, but you're a sick individual for putting on socks first before anything else
Is it always valid to change the root of the heap?
great explanation sir, thank you
Thx
You sound like MoistCritikal before he got into chain smoking
ape dzveris chhi
Great explanation, thank you :)
......WOW
None of the C++ STL, Java Collections, or the Python standard library has an indexed priority queue. Go has a heap.Fix method, which allows updating a value, but it does not have an index function to find the value, and it's not obvious how to locate items after a Fix call, which takes an index as input.
8:23 why does it say 11, not 10?
Great explanation! Thank you.
Very clearly explained. You need V-1 iterations in the worst case. However, if no relaxations were performed in a round, obviously the next round will not have any either, so you can stop early by detecting that the number of relaxations made in a round is zero.
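The early-exit optimization the comment above describes can be sketched as a Bellman-Ford variant in Python (an illustration with an assumed edge-list representation, not the channel's source):

```python
def bellman_ford(n, edges, source):
    # n: number of vertices; edges: list of (u, v, weight) tuples.
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # At most n-1 rounds are needed, but if a full round relaxes
    # nothing, no later round can relax anything either, so stop early.
    for _ in range(n - 1):
        relaxed = False
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                relaxed = True
        if not relaxed:
            break  # early exit: distances have converged
    return dist
```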
Satisfactory
At 16:45 we are setting low[node] to ids[at]; would that not have already happened above, when low[at] = min(low[at], low[to])? This seems redundant. What am I missing??
Take the case 0 -> [1], 1 -> [0, 2], 2 -> [1]. The low-link values start as [0, 1, 2]. We start traversing from 0 and DFS to 1. At node 1 we may go either of two ways: (1) traverse to 0 first, then 2, or (2) traverse to 2 first, then 0. In the first case all is good: 2 will also get a low-link value of 0. In the second case, 2 is traversed first and marks its low-link value as 1, since 1 has a low-link value of 1 at that point. Node 1 then traverses 0 later and gets a low-link value of 0, but it is not propagated to 2 at that point. Hence we would need to reassign low-link values on the stack once we come back to the node we started with, if its low-link value is itself. To ensure we propagate low values correctly we would have to visit already-visited neighbours first, which would increase the time complexity.
🎯 Key points for quick navigation:
- 00:01 🔑 Explains the union and find operations in a union-find data structure.
- 00:17 📝 Creates a bijective mapping between objects and integers to construct an array-based union-find.
- 01:05 🗃️ Stores the mappings in a hash table for efficient lookup.
- 01:34 📐 Initially, each node in the array points to itself as a root node.
- 02:30 🔗 To unify objects, the smaller component's root node is set as the parent of the larger component's root node.
- 04:34 🧰 Explains the process of merging smaller components into larger ones during unification.
- 06:24 🌳 If two objects belong to the same component, no unification is needed.
- 07:42 🔍 To find the component an element belongs to, follow the parent nodes until reaching the root node.
- 08:41 🚩 The number of components equals the number of remaining root nodes and never increases.
- 09:48 ⚡ Path compression, covered in the next video, improves the time complexity of union-find operations.
Made with HARPA AI
what the fuck are you talking about
Can anybody tell me the meaning of the line state = subset ^ (1 << next) at 14:59?
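A likely reading of the line asked about above, sketched in Python: in a bitmask DP, an integer's bits encode set membership, and XOR with `1 << next` toggles bit `next`. When that bit is known to be set, the XOR *removes* `next` from the subset, producing the smaller subproblem's state. (This is an inference from the expression itself, not a quote from the video.)

```python
# Bit i set  <=>  node i is in the subset.
subset = 0b1011        # encodes the subset {0, 1, 3}
next_node = 1          # a node known to be IN the subset

# XOR toggles bit 'next_node'; since it is currently set, this clears
# it, i.e. removes next_node from the subset.
state = subset ^ (1 << next_node)
print(bin(state))      # 0b1001, i.e. the subset {0, 3}
```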
great explanation thank you so much🥰🥰
thank you so much
This is a great explanation!
What are more complex areas of graph theory beyond this or connection to other parts of mathematics?
It was great how you explained all the states and how we could compute the current states from previous states, but as soon as I saw the long and messy code, I lost interest completely. You could have used simpler code with an array of length 4 containing all possible states, computing the values based on the last array (state).
This looks like the implementation of ArrayList in Java. Is that right?
brilliant explanation = thank you so much
Dude this rocks..
WATCH IN 1.5X SPEED 🗣🗣🗣🗣
Golden Standards for teaching! Great work man!
Beautiful! :)
subscribe++;
What I don't understand, if you could please explain, is why h-1 is used on every subtree. For example: x is the root; going right we have a z node; left of z is a y node; left of y is just a triangle (working on the double rotation) with height h-1; right of y is another triangle with height h-2; right of z is a triangle of height h-1 again; left of the root x is one triangle of height h-1. Where do these h-2 and h-1 values come from? Also, when checking the imbalance, an h+1 comes in to make the imbalance 2. How does this work? I can't wrap my head around it. PLEASE HELP
I hate algorithms. nice video tho
Thank you very much sir!!
Wow, just wow! On a side note, I think it's better to consider the position of the LSB in the 0-indexed way, since in that case an index is directly responsible for 2^(0-indexed position of LSB) elements below it, inclusive of itself. Then cascading to the appropriate next element is intuitive: just remove the LSB, essentially going down by 2^(0-indexed position of LSB) :D. Anyways, fantastic job!
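The LSB cascading the comment above describes can be sketched as a Fenwick tree (binary indexed tree) in Python — a minimal illustration under the usual 1-indexed convention, not the channel's Java code. A query strips the lowest set bit to jump ranges; an update adds it to climb to the next responsible index.

```python
def lsb(i):
    # Isolate the lowest set bit of i, e.g. lsb(12) == 4 (0b1100 -> 0b100).
    return i & -i

def update(tree, i, delta):
    # Point update: add delta at 1-indexed position i, climbing to every
    # index responsible for a range covering i.
    while i < len(tree):
        tree[i] += delta
        i += lsb(i)

def prefix_sum(tree, i):
    # Prefix-sum query over positions 1..i: repeatedly strip the lowest
    # set bit to cascade down to the next covered range.
    total = 0
    while i > 0:
        total += tree[i]
        i -= lsb(i)
    return total
```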