Conference paper presented at the 28th European Signal Processing Conference (EUSIPCO), in which we propose an adaptive sampling method for diffusion networks.

## New paper: A Low-Cost Algorithm for Adaptive Sampling and Censoring in Diffusion Networks

This paper summarizes the results obtained by Daniel G. Tiglea while he was working toward his M.S. degree.

## 7 Python Code Examples for Everyday Use

On September 27, 2020, the article *7 Python Code Examples for Everyday Use* was published on Go Skills. In the following, you’ll find the summary and the link to the code on GitHub.

## Generative Adversarial Networks: Build Your First Models

*Generative Adversarial Networks: Build Your First Models* was published on Real Python. In the following, you’ll find the summary and the link to the code on GitHub.

## How To Set Up a PageKite Front-End Server on Debian 9

On October 25, 2019, the article *How To Set Up a PageKite Front-End Server on Debian 9* was published on Digital Ocean. In the following, you’ll find the summary and the link to the article on the Digital Ocean website.

## Arduino With Python: How to Get Started

On October 21, 2019, the article *Arduino With Python: How to Get Started* was published on Real Python. In the following, you’ll find the summary and the link to the article on the Real Python website.

## A Brief Introduction to GANs – SciPy Meetup Talk

On August 15, I gave a talk at the SciPy Meetup – Coders Hub Powered by Giant Steps. Here you’ll find the summary of the presentation, the slides, and the code (in Portuguese).

## New paper: An Adaptive Sampling Technique for Graph Diffusion LMS Algorithm

Conference paper presented at the 27th European Signal Processing Conference (EUSIPCO), in which we propose an adaptive sampling method for the diffusion LMS algorithm for adaptive learning from streaming graph signals.

## Setting Up Python for Machine Learning on Windows

*This Post Was Originally Published on Real Python on Oct 31st, 2018 by Renato Candido.*

Python has been widely used for numerical and scientific applications in recent years. However, to perform numerical computations efficiently, Python relies on external libraries, sometimes implemented in other languages, such as the NumPy library, which is partly implemented in Fortran.

Due to these dependencies, setting up an environment for numerical computations and linking all the necessary libraries isn’t always trivial. It’s common for people to struggle to get things working in workshops involving the use of Python for machine learning, especially when they are using an operating system that lacks a package management system, such as Windows.

**In this article, you’ll:**

- Walk through the details for setting up a Python environment for numerical computations on a Windows operating system
- Be introduced to Anaconda, a Python distribution designed to circumvent these setup problems
- See how to install the distribution on a Windows machine and use its tools to manage packages and environments
- Use the installed Python stack to build a neural network and train it to solve a classic classification problem
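As a rough sketch of the kind of environment management the article covers (assuming Anaconda is already installed; the environment name `ml-env` and package list are illustrative, not the article’s exact setup), a dedicated environment can be created from the command line:

```shell
# Create an isolated environment named "ml-env" with Python and a
# numerical package (names here are illustrative examples)
conda create --name ml-env python=3 numpy

# Activate it so that subsequent commands use its interpreter and packages
conda activate ml-env

# List the installed packages to verify the setup
conda list
```

Keeping each project in its own environment avoids the dependency conflicts that make manual setups on Windows painful.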

## Pure Python vs NumPy vs TensorFlow Performance Comparison

*This Post Was Originally Published on Real Python on May 7th, 2018 by Renato Candido.*

Python has a design philosophy that stresses allowing programmers to express concepts readably and in fewer lines of code. This philosophy makes the language suitable for a diverse set of use cases: simple scripts for the web, large web applications (like YouTube), scripting language for other platforms (like Blender and Autodesk’s Maya), and scientific applications in several areas, such as astronomy, meteorology, physics, and data science.

It is technically possible to implement scalar and matrix calculations using Python lists. However, this can be unwieldy, and performance is poor when compared to languages suited for numerical computation, such as MATLAB or Fortran, or even some general-purpose languages, such as C or C++.
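To make the point concrete, here is a minimal sketch (pure Python, no external libraries; the helper names are my own) of a dot product and a matrix-vector product built on plain lists — every operation needs an explicit loop or comprehension:

```python
# Dot product of two vectors represented as Python lists
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Matrix-vector product: dot each row of the matrix with the vector
def matvec(m, v):
    return [dot(row, v) for row in m]

u = [1.0, 2.0, 3.0]
m = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0]]

print(dot(u, u))     # 14.0
print(matvec(m, u))  # [1.0, 4.0]
```

Writing every loop by hand like this is what makes list-based numerics both verbose and slow compared to a vectorized library.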

To circumvent this deficiency, several libraries have emerged that maintain Python’s ease of use while lending the ability to perform numerical calculations efficiently. Two such libraries worth mentioning are *NumPy* (one of the pioneering libraries for bringing efficient numerical computation to Python) and *TensorFlow* (a more recent library focused on deep learning algorithms).

- NumPy provides support for large multidimensional arrays and matrices, along with a collection of mathematical functions to operate on these elements. The project relies on well-known packages implemented in other languages (like Fortran) to perform efficient computations, bringing the user both the expressiveness of Python and performance similar to MATLAB or Fortran.
- TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team. The main focus of the library is to provide an easy-to-use API to implement practical machine learning algorithms and deploy them to run on CPUs, GPUs, or a cluster.
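As an illustration of the vectorized style NumPy enables (a sketch of my own, not code from the article), a matrix-vector product that would need explicit loops over lists becomes a single operator call on arrays:

```python
import numpy as np

# The same kind of data, now as NumPy arrays
u = np.array([1.0, 2.0, 3.0])
m = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])

# Vectorized operations replace explicit Python loops;
# the heavy lifting runs in compiled code
dot_uu = u @ u   # dot product -> 14.0
mv = m @ u       # matrix-vector product -> array([1., 4.])
print(dot_uu)
print(mv)
```

The `@` operator dispatches to NumPy’s compiled routines, which is where the performance gap over list-based code comes from.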

**But how do these schemes compare? How much faster does the application run when implemented with NumPy instead of pure Python? What about TensorFlow?** The purpose of this article is to begin to explore the improvements you can achieve by using these libraries.

To compare the performance of the three approaches, you’ll build a basic regression with native Python, NumPy, and TensorFlow.
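As a rough pure-Python sketch of the kind of basic regression used in such a comparison (the synthetic data, learning rate, and iteration count here are illustrative choices of mine, not the article’s benchmark), one can fit y ≈ w·x + b by gradient descent using only lists and loops:

```python
# Synthetic data generated by y = 2*x + 1 (illustrative, not the
# article's benchmark data)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0
lr = 0.05          # learning rate (illustrative value)
n = len(xs)

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

In a NumPy or TensorFlow version, the inner sums become vectorized array operations, which is exactly where the measured speedups come from.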