The goal of the first [AnTeDe](https://moodle.msengineering.ch/course/view.php?id=2222) lab is to run simple operations for text analysis using the [NLTK](http://www.nltk.org/) toolkit. You will use the environment that you set up following the instructions of the previous notebook: [Python 3](https://www.python.org/) with [Jupyter](https://jupyter.org/) notebooks.

You will use NLTK functions to get texts from the web and segment (split) them into sentences and words (also called *tokens*). You will also experiment with extracting statistics about the texts.

To submit your practical work, please re-execute all cells of this notebook via "Runtime > Restart and run all", then save it, zip it, and submit it as homework on the [AnTeDe Moodle page](https://moodle.msengineering.ch/course/view.php?id=2222).

<font color='green'>Please answer the questions in green within this notebook, and submit the completed notebook under the corresponding homework on Moodle.</font>
%% Cell type:markdown id: tags:

## NLTK: the Natural Language (Processing) Toolkit

Please add NLTK to your Python installation by following the installation instructions at the [NLTK website](http://www.nltk.org/install.html). A good way to get started is to look at [Chapter 1](http://www.nltk.org/book/ch01.html) of the [NLTK book (NLP with Python)](http://www.nltk.org/book/) and to try some of the instructions there.

The online edition is updated for Python 3, but the printed book, also available in PDF on some websites, only covers Python 2 ([Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit, Steven Bird, Ewan Klein, and Edward Loper, O'Reilly Media, 2009](http://shop.oreilly.com/product/9780596516499.do)).

To use NLTK in Jupyter, all you need is to `import nltk` before using it. You must use the `nltk.` prefix unless you write, for instance, `from nltk.book import *`, which imports and defines several text collections (a.k.a. corpora). NLTK can download a large number of corpora from its associated website: its download manager can be called from a Python interpreter (not a notebook) using `nltk.download()`.
%% Cell type:code id: tags:

``` python
import nltk
# from nltk.book import *

# Workaround used to download NLTK data on macOS when SSL certificate checks fail
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '')
        and getattr(ssl, '_create_unverified_context', None)):
    ssl._create_default_https_context = ssl._create_unverified_context
```

%% Cell type:markdown id: tags:
<font color='green'>**Question**: To verify your NLTK installation, please define a list of words called `sentence1`, print its length (with `len()`), and use `nltk.bigrams` to generate all bigrams from it, i.e. pairs of consecutive words. You can see an example in [Sec. 3.3 of Ch. 1 of the NLTK book](http://www.nltk.org/book/ch01.html#collocations-and-bigrams). Please also sort the bigrams alphabetically.</font>
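A minimal sketch of one possible answer is shown below (the word list itself is an arbitrary example, not prescribed by the lab):

%% Cell type:code id: tags:

``` python
# Hedged sketch: any short list of words will do; this one is arbitrary.
sentence1 = ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
print(len(sentence1))                      # length of the list
bigrams1 = list(nltk.bigrams(sentence1))   # pairs of consecutive words
print(sorted(bigrams1))                    # bigrams sorted alphabetically
```

%% Cell type:markdown id: tags: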
## Using NLTK to download, tokenize, and save a text

<font color='green'>**Question**: Taking inspiration from [Chapter 3 (3.1. Processing Raw Text) of the NLTK book](http://www.nltk.org/book/ch03.html), get the book "Crime and Punishment" (file 2554) from the Gutenberg Project in text format. Print its length and comment on whether this refers to bytes or characters.</font>
%% Cell type:code id: tags:

``` python
# from urllib import request  # you may need to: pip install urllib
import os
import requests  # urllib did not install with pip

# Please write your Python code below and execute it.
FILE_PATH = "./CrimeAndPunishment.txt"
URL = "https://www.gutenberg.org/files/2554/2554-0.txt"  # plain-text file used in the NLTK book
if not os.path.exists(FILE_PATH):  # Cache the file on the local drive if it does not exist
    with open(FILE_PATH, 'w', encoding='utf8') as f:
        f.write(requests.get(URL).content.decode('utf8'))
with open(FILE_PATH, encoding='utf8') as f:
    text_crime_and_punishment = f.read()
print(len(text_crime_and_punishment))  # Refers to the number of characters, not bytes
```
%% Cell type:markdown id: tags:

If you are curious about "special" characters, you can refer to [Python's documentation of Unicode support](https://docs.python.org/3.8/howto/unicode.html).

We now want to keep only the meaningful text from the book, without the header and the final license.

<font color='green'>**Question**: Determine how much you should trim from the beginning and from the end in order to keep only the actual text of the book. <br>
The book starts after: "\*\*\* START OF THE PROJECT GUTENBERG EBOOK CRIME AND PUNISHMENT \*\*\*" <br>
The book ends before: "\*\*\* END OF THE PROJECT GUTENBERG EBOOK CRIME AND PUNISHMENT \*\*\*" <br>
Print out the resulting start and end indices and save the result into a new string.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code below and execute it.
START_SEQUENCE = "*** START OF THE PROJECT GUTENBERG EBOOK CRIME AND PUNISHMENT ***"
END_SEQUENCE = "*** END OF THE PROJECT GUTENBERG EBOOK CRIME AND PUNISHMENT ***"
```
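%% Cell type:markdown id: tags:

A minimal sketch of the trimming step is shown below (the variable name `book_text` is our own choice, not prescribed by the lab):

%% Cell type:code id: tags:

``` python
# Hedged sketch: locate the markers and slice the text between them.
start_idx = text_crime_and_punishment.find(START_SEQUENCE) + len(START_SEQUENCE)
end_idx = text_crime_and_punishment.find(END_SEQUENCE)
print(start_idx, end_idx)   # resulting start and end indices
book_text = text_crime_and_punishment[start_idx:end_idx]
```

%% Cell type:markdown id: tags: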
We will first segment the text into sentences, then tokenize each sentence, i.e. segment it into tokens (words and punctuation marks). We can also tokenize the entire text without segmenting it into sentences first. We will use the following NLTK functions:

* `nltk.sent_tokenize(...)` (documented [here](https://www.nltk.org/api/nltk.tokenize.html#nltk.tokenize.word_tokenize)) (usually, only word segmentation is called *tokenization*, but NLTK uses this name for both functions)
* `nltk.word_tokenize(...)` (documented on the same page)

<font color='green'>**Question**: Segment the text into sentences with NLTK, display the number of sentences, and display the five sentences from index 500 to 504.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
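%% Cell type:markdown id: tags:

A possible sketch (assuming `book_text` from the trimming step above):

%% Cell type:code id: tags:

``` python
# Hedged sketch: sentence segmentation with NLTK.
sentences1 = nltk.sent_tokenize(book_text)
print(len(sentences1))           # total number of sentences
for s in sentences1[500:505]:    # sentences 500 to 504 inclusive
    print(s)
```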
%% Cell type:code id: tags:

``` python
# If needed, here is how to save the result, with one sentence per line.
import os

filename1 = "sample_text_1.txt"
# For a local file, this is the relative path with respect to the notebook.
# In Colab, use a path like this: /content/gdrive/My Drive/sample_text_1.txt
if os.path.exists(filename1):
    os.remove(filename1)
fd = open(filename1, 'a', encoding='utf8')
for s in sentences1:
    fd.write(s + '\r\n')
fd.close()
```
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Segment each sentence into tokens (i.e., words and punctuation marks), store the result in a new variable (a list of lists), and display the same five sentences as above.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
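%% Cell type:markdown id: tags:

A possible sketch (the variable name `tokenized_sentences` is our own choice):

%% Cell type:code id: tags:

``` python
# Hedged sketch: tokenize each sentence into a list of tokens.
tokenized_sentences = [nltk.word_tokenize(s) for s in sentences1]
for ts in tokenized_sentences[500:505]:   # the same five sentences as above
    print(ts)
```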
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Indicate how many tokens there are in total.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
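%% Cell type:markdown id: tags:

A possible sketch:

%% Cell type:code id: tags:

``` python
# Hedged sketch: total token count over all tokenized sentences.
print(sum(len(ts) for ts in tokenized_sentences))
```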
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Tokenize the initial text without segmenting it into sentences, and compare the resulting total number of tokens with the one obtained above.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
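%% Cell type:markdown id: tags:

A possible sketch (any difference in the counts, if there is one, comes from tokenization decisions at sentence boundaries):

%% Cell type:code id: tags:

``` python
# Hedged sketch: tokenize the whole trimmed text at once.
words1 = nltk.word_tokenize(book_text)
print(len(words1))   # compare with the per-sentence total above
```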
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Find the size of the vocabulary of your text (the unique *word types*) by converting the list of words (the *tokens*) to a Python `set`. Note that these *types* include punctuation marks and other symbols produced by tokenization, and that upper- and lower-case variants count as different types. Display all words longer than 15 characters that do not contain a hyphen (-).</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
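%% Cell type:markdown id: tags:

A possible sketch:

%% Cell type:code id: tags:

``` python
# Hedged sketch: vocabulary size and long hyphen-free words.
vocabulary1 = set(words1)
print(len(vocabulary1))   # number of word types
print(sorted(w for w in vocabulary1 if len(w) > 15 and '-' not in w))
```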
%% Cell type:markdown id: tags:

<font color='green'>**Question**: What is the type-token ratio (TTR) of your text?</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code below and execute it.
```
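%% Cell type:markdown id: tags:

A possible sketch (the TTR is simply the number of types divided by the number of tokens):

%% Cell type:code id: tags:

``` python
# Hedged sketch: type-token ratio.
print(len(vocabulary1) / len(words1))
```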
%% Cell type:markdown id: tags:

## Computing statistics with NLTK

You can create a `nltk.Text` object from the tokens of the text, without sentence segmentation. This enables you to compute statistics using NLTK functions. [Chapter 1 of the NLTK book](http://www.nltk.org/book/ch01.html) provides examples of operations that can be done on texts.

NLTK Texts can in fact store one of the following text formats:

1. a string;
2. the list of all words (list of strings);
3. the list of all sentences (list of lists of strings).

However, only option (2) allows the correct use of counting methods for NLTK Texts. Note that `nltk.word_tokenize()` and `nltk.sent_tokenize()` only apply to strings, not to `nltk.Text` objects, even if they store a string.
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Create a `nltk.Text` object from the tokenized version of your text (without sentence segmentation).</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
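%% Cell type:markdown id: tags:

A possible sketch:

%% Cell type:code id: tags:

``` python
# Hedged sketch: wrap the token list in an nltk.Text object.
text1 = nltk.Text(words1)
print(text1)
```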
%% Cell type:markdown id: tags:

NLTK can compute word frequencies for a given text, yielding a new object called a frequency distribution (`FreqDist`): see [Sec. 3.1 of Ch. 1 of the NLTK book](http://www.nltk.org/book/ch01.html#frequency-distributions). Using such an object, we can get the most frequent words.

<font color='green'>**Question**: Construct the frequency distribution of your text and use the `most_common` method of the object to display the words longer than 3 characters among the 50 most frequent words.</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
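%% Cell type:markdown id: tags:

A possible sketch:

%% Cell type:code id: tags:

``` python
# Hedged sketch: frequency distribution, then filter the 50 most frequent words.
fdist1 = nltk.FreqDist(text1)
print([(w, c) for (w, c) in fdist1.most_common(50) if len(w) > 3])
```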
%% Cell type:markdown id: tags:

<font color='green'>**Question**: Display the cumulative frequency plot of the 50 most frequent words of your text, for instance using examples from [Sec. 3.1 of Ch. 1 of the NLTK book](http://www.nltk.org/book/ch01.html#frequency-distributions). You can either use the plotting functions from NLTK, or create two lists called `x_values` and `y_values` and generate a plot with `plt.plot(x_values, y_values)`.</font>
%% Cell type:code id: tags:

``` python
# Before using matplotlib to display graphs inline, you must execute
# the following two lines (assuming you already installed the library).
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
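%% Cell type:markdown id: tags:

A possible sketch using NLTK's built-in plotting:

%% Cell type:code id: tags:

``` python
# Hedged sketch: cumulative frequency plot of the 50 most frequent words.
fdist1.plot(50, cumulative=True)
```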
%% Cell type:markdown id: tags:

Zipf's law was originally formulated in terms of quantitative linguistics, stating that, given some corpus of natural language utterances, the frequency $f$ of any word is inversely proportional to its rank $k$ in the frequency table, i.e. $f \propto 1/k^s$ (in the classic version of Zipf's law, the exponent $s$ is 1). With $s = 1$, this can alternatively be re-cast as follows:

$f \cdot k \approx const$

$\log(f \cdot k) \approx const$ or

$\log(f) \approx const - \log(k)$

which represents a linear relationship between frequency $f$ and rank $k$ on a log-log scale.
You can use this relation to ask questions like:

- What is the probability of encountering the most common word or the 10th most common word in a corpus with 100,000 words?
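As a worked sketch of this kind of calculation (assuming the classic exponent $s = 1$ and, purely for illustration, a vocabulary of $N = 10{,}000$ types): the normalized probability of the word of rank $k$ is $P(k) = \frac{1/k}{H_N}$, where $H_N = \sum_{i=1}^{N} 1/i \approx \ln N + 0.577 \approx 9.79$. This gives $P(1) \approx 0.102$ for the most common word and $P(10) \approx 0.0102$ for the 10th most common, i.e. roughly 10,200 and 1,020 expected occurrences in a corpus of 100,000 words.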
Knowledge of the Zipf distribution has also been used to build better neural language models.
<font color='green'>**Question**: Generate a list of the number of occurrences of each word type, in decreasing order, from the `FreqDist` object. Plot for the first 100 ranks the number of occurrences on the *y* axis and the rank of each value (1, 2, 3, ..., 100) on the *x* axis, using a **log-log scale**. Add the plot of the function $y = a/x^b$, trying to set *a* and *b* so that the two lines are as close as possible (by trial and error, not using a formal method). This behavior is in fact predicted by [Zipf's law](https://en.wikipedia.org/wiki/Zipf%27s_law).</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
```
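%% Cell type:markdown id: tags:

A possible sketch (the values of `a` and `b` below are rough starting guesses, to be tuned by eye):

%% Cell type:code id: tags:

``` python
# Hedged sketch: rank-frequency plot on a log-log scale, with an a/x**b curve.
counts = [c for (w, c) in fdist1.most_common(100)]   # occurrences in decreasing order
ranks = range(1, len(counts) + 1)
a, b = counts[0], 1.0   # illustrative guesses, to be adjusted by trial and error
plt.loglog(ranks, counts, label='observed')
plt.loglog(ranks, [a / (x ** b) for x in ranks], label='a/x^b')
plt.xlabel('rank'); plt.ylabel('occurrences'); plt.legend(); plt.show()
```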
%% Cell type:markdown id: tags:

## Processing markup with Beautiful Soup

To extract text from an HTML or XML file, you can use the `BeautifulSoup` Python package. Some examples are found in [Chapter 3 of the NLTK book](http://www.nltk.org/book/ch03.html). The simplest way is using `get_text`, but this will also get text from tables, image captions, etc. You can also check the [BeautifulSoup documentation](https://beautiful-soup-4.readthedocs.io/en/latest/) or tutorials [here](https://matix.io/extract-text-from-webpage-using-beautifulsoup-and-python/) or [here](https://www.pluralsight.com/guides/extracting-data-html-beautifulsoup).
%% Cell type:code id: tags:

``` python
from urllib import request  # if needed
from bs4 import BeautifulSoup
```
%% Cell type:code id: tags:

``` python
# Sample code: the extracted text is the "raw2" string.
url2 = "https://en.wikipedia.org/wiki/Switzerland"
response2 = request.urlopen(url2)
html2 = response2.read().decode('utf8')
raw2 = BeautifulSoup(html2, 'html.parser').get_text()  # explicit parser avoids a warning
```
%% Cell type:markdown id: tags:

<font color='green'>**Question**: What are the numbers of word tokens and word types, and what is the TTR, of the Wikipedia page at `url2`?</font>
%% Cell type:code id: tags:

``` python
# Please write your Python code in this cell and execute it.
# BEGIN_REMOVE
words2 = nltk.word_tokenize(raw2)
vocabulary2 = set(words2)
print("Number of tokens: ", len(words2))
print("Number of types: ", len(vocabulary2))
print("Type to token ratio: ", len(vocabulary2) / len(words2))
# END_REMOVE
```
%% Cell type:markdown id: tags:

## More advanced pre-processing options

Please read and experiment with the notebook `MSE_AnTeDe_TextPreprocessingDemo.ipynb`, where you will find more advanced pre-processing options:

1. a set of NLTK functions for lemmatization and stemming;
2. the in-house class `TextPreprocessing`;
3. gensim's `preprocess_documents` function.

The underlying definitions and methods of some of them will be presented in the following lessons of AnTeDe, and you will be able to use them in future lab work.
%% Cell type:markdown id: tags:

## End of AnTeDe Lab 1

Please re-run all cells via "Runtime > Restart and run all", save this completed notebook, compress it to a *zip* file, and upload it to Moodle.