SitePoint – Learn HTML, CSS, JavaScript, PHP, Ruby & Responsive Design


The Real Future of Remote Work is Asynchronous

I’ve been working remotely for over a decade, since well before tools like Slack or Zoom existed. In some ways it was easier back then: you worked from wherever you were and had the space to manage your workload however you wanted. If you wanted to go hardcore creative mode at night, sleep in, then leisurely read fiction over brunch, you could.

Now, in the age of the “green dot” (or “presence prison,” as Jason Fried calls it), working remotely can be more suffocating than in-person work. The freedom we worked hard to create by escaping the 9-to-5 has turned into constant monitoring, with the expectation that we are on, accessible, productive, and communicative 24/7.

I see this in job postings for remote roles. Companies champion remote work and proudly advertise their flexible cultures, only to then require that candidates be based within 60 minutes of the Pacific time zone, that the hours are set, and that standup is at 8:30am daily.

One of the benefits of remote work is that it brings the world closer together and creates a level playing field for the world’s best talent. Whether you were in Bengaluru or Berlin, you could still work with a VC-backed, cash-rich startup in San Francisco earning a solid hourly rate. If remote slowly turns into a way of working in real time with frequent face time, we will see less of this.

And let’s not forget trust, the crux of remote culture. Companies create tools that automatically record your screen at intervals to show management or clients you’re delivering. I founded a freelance marketplace called CloudPeeps, and not recording your screen, as Upwork does, is one way we attract a different caliber of indie professional.

You can have more freedom in an office. From my beige cubicle at one of my first roles, I witnessed a colleague plan a wedding over the course of many months, including numerous calls to vendors and 20 tabs open for research.
Most of the team was none the wiser; this wouldn’t be the case with remote work today. At the heart of this friction is the demand for real-time, synchronous communication. If we champion asynchronous communication as the heart of remote work, what does the future of remote look like? The post The Real Future of Remote Work is Asynchronous appeared first on SitePoint.

7 Ways Developers Can Contribute to Climate Action

Whether you’ve just started out as a software engineer or you’ve been at it for decades, you can play a role in helping to positively impact the climate. When people first consider this, they tend to think about the impact of writing efficient code. Of course, you should always write efficient, elegant code. But unless the code you’re creating is going to be used by millions of people, it may not be where you can have the biggest impact from a climate perspective. (Code used by millions or billions of people is probably highly optimized anyway!) In this article, we'll look at seven other ways you can help.

Choose Where You Spend Your Career

Being an engineer means you have one of the most sought-after, transferable occupations on the planet. In virtually any city in the world, you'll be in demand and probably well paid, so you have plenty of options. Choosing to work in a place that's at the intersection of your cares and your code is one of the easiest ways to have an impact. Engineering is also one of the few careers where the job can be done remotely, and there's a growing list of companies focused on hiring people to work remotely.

Find Time to Contribute to Open-source Projects

Open source enables us all to benefit from a collective effort and shared knowledge, so the benefits are already clear. But what you may not be aware of is the mass of open-source projects specifically targeted at helping the environment. Open source also powers some of the biggest sites on the Internet, so you may find your code being used at that billions-of-people scale mentioned earlier. While it's easy to find projects you can work on via a quick Google search, this article highlights a few.

Apply Your Skills to Non-profits

A lot of the work being done to combat or deal with the impacts of climate change is being done by the non-profit sector, and the non-profit sector perennially lacks both capital and talent.
When people think of volunteering, they tend to think of painting a shed or handing out food at a shelter, but you can create a bigger and more lasting impact by applying your skills and experience. I worked with a non-profit to help design, set up and configure Salesforce's service (free for non-profits) so they could run more efficiently and at a higher scale. Hour for hour, this was the best way I could help them have a bigger impact.

Influence the Way the Product is Designed

With the rise of agile, squads (pioneered by Spotify) and cross-functional teams generally, the dynamic within the team has changed. Engineers now have a seat at the table to drive what the software does, how it works, and even the end-customer problems it solves. This means that, as an engineer, you can either walk into the room and be told what is being built, or you can stand up and help drive the outcome by considering the climate impact of a design decision. A great example might be defaulting to a lower-impact shipping option in an ecommerce site, or Google Maps defaulting to a walking route instead of a driving route.

How to Divert Traffic Using IP2Location in a Next.js Website

This article was created in partnership with IP2Location. Thank you for supporting the partners who make SitePoint possible.

In a world where online commerce has become the norm, we need to build websites that are faster, more user-friendly and more secure than ever. In this article, you’ll learn how to set up a Node.js-powered website that’s capable of directing traffic to relevant landing pages based on a visitor's country. You'll also learn how to block anonymous traffic (e.g. Tor) in order to eliminate risks coming from such networks.

In order to implement these features, we'll be using the IP2Proxy web service provided by IP2Location, a geo IP solutions provider. The web service is a REST API that accepts an IP address and responds with geolocation data in JSON format. Here are some of the fields that we'll receive:

- countryName
- cityName
- isProxy
- proxyType
- etc.

We'll use Next.js to build a website containing the following landing pages:

- Home Page: API fetching and redirection will trigger from this page
- Landing Page: supported countries will see the product page in their local currency
- Unavailable Page: other countries will see this page with an option to join a waiting list
- Abuse Page: visitors using Tor networks will be taken to this page

Now that you're aware of the project plan, let's see what you need to get started.

Prerequisites

On your machine, I would highly recommend the following:

- Latest LTS version of Node.js (v12)
- Yarn

An older version of Node.js will do, but the most recent LTS (long-term support) version contains performance and debugging improvements in the area of async code, which we'll be dealing with. Yarn isn't necessary, but you'll benefit from its faster performance if you use it.

I’m also going to assume you have a good foundation in:

- React
- React Hooks

As mentioned earlier, we'll be using Next.js to build our website. If you're new to it, you can follow their official interactive tutorial to quickly get up to speed.
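To make the redirection rules concrete before we build the pages, here's a small sketch of how those response fields might drive page selection. The field names (countryName, isProxy, proxyType) come from the IP2Proxy response described above; the sample values and the resolveLandingPage helper are purely illustrative and not part of IP2Location's API:

```javascript
// Illustrative IP2Proxy-style response (values are made up for this example)
const sampleResponse = {
  countryName: 'Germany',
  cityName: 'Berlin',
  isProxy: 'YES',
  proxyType: 'TOR'
}

// A hypothetical helper that decides which landing page a visitor
// should see, based on the response fields listed in the article.
function resolveLandingPage(res, supportedCountries) {
  if (res.isProxy === 'YES' && res.proxyType === 'TOR') return '/abuse'
  if (supportedCountries.includes(res.countryName)) return '/landing'
  return '/unavailable'
}

console.log(resolveLandingPage(sampleResponse, ['Germany', 'France']))
// logs '/abuse' because the sample visitor is on a Tor network
```

The same three-way branch (abuse, landing, unavailable) is what we'll wire into the Home Page later in the tutorial.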
IP2Location + Next.js Project Walkthrough

Project Setup

To set up the project, launch the terminal and navigate to your workspace. Execute the following command:

```
npx create-next-app
```

Feel free to give your app any name. I've called mine next-ip2location-example. After installation is complete, navigate to the project's root and execute yarn dev. This will launch the Node.js dev server. If you open your browser and navigate to localhost:3000, you should see a page with the header “Welcome to Next.js”. This should confirm that we have a working app that runs without errors. Stop the app and install the following dependencies:

```
yarn add next-compose-plugins dotenv-load next-env @zeit/next-css bulma isomorphic-unfetch
```

We'll be using the Bulma CSS framework to add out-of-the-box styling for our site. Since we'll be connecting to an API service, we'll set up an .env file to store our API key. Do note that this file should not be stored in a repository. Next, create the file next.config.js at the root of the project and add the following code:

```javascript
const withPlugins = require('next-compose-plugins')
const css = require('@zeit/next-css')
const nextEnv = require('next-env')
const dotenvLoad = require('dotenv-load')

dotenvLoad()

module.exports = withPlugins([
  nextEnv(),
  [css]
])
```

The above configuration allows our application to read the .env file and load values. Do note that the keys will need to have the prefix NEXT_SERVER_ in order to be loaded in the server environment. Visit the next-env package page for more information. We'll set the API key in the next section. The above configuration also gives our Next.js app the capability to pre-process CSS code via the @zeit/next-css package. This will allow us to use the Bulma CSS framework in our application. Do note that we'll need to import the Bulma CSS code into our Next.js application. I'll soon show you where to do this.
Obtaining an API Key for the IP2Proxy Web Service

As mentioned earlier, we'll need to convert a visitor's IP address into information we can use to redirect or block traffic. Simply head to the following link and sign up for a free trial key: IP2Proxy Detection Web Service.

Once you sign up, you'll receive the free API key via email. Create an .env file and place it at the root of your project folder. Copy your API key to the file as follows:

```
NEXT_SERVER_IP2PROXY_API=<place API key here>
```

This free key will give you 1,000 free credits. At a minimum, we'll need the following fields for our application to function:

- countryName
- proxyType

If you look at the pricing section on the IP2Proxy page, you'll note that the PX2 package will give us the required response. This means each query will cost us two credits. Below is a sample of how the URL should be constructed:

```
http://api.ip2proxy.com/?ip=
```

You can also submit the URL query without the IP. The service will use the IP address of the machine that sent the request. We can also use the PX8 package to get all the available fields, such as isp and domain, in the top-most package of the IP2Proxy Detection Web Service:

```
http://api.ip2proxy.com/?key=demo&package=PX8
```

In the next section, we'll build a simple state management system for storing the proxy data, which will be shared among all site pages.
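Putting those pieces together, a small helper can assemble the query URL. The endpoint and the key, package and ip parameters are the ones shown above; the buildProxyUrl function itself is just an illustrative sketch, not part of any IP2Location SDK:

```javascript
// Sketch: build the IP2Proxy request URL from an API key, a package
// name (e.g. 'PX2' or 'PX8'), and an optional IP address.
function buildProxyUrl(apiKey, pkg, ip) {
  const params = new URLSearchParams({ key: apiKey, package: pkg })
  if (ip) params.set('ip', ip)
  return `http://api.ip2proxy.com/?${params.toString()}`
}

console.log(buildProxyUrl('demo', 'PX2', '8.8.8.8'))
// http://api.ip2proxy.com/?key=demo&package=PX2&ip=8.8.8.8
```

Omitting the third argument reproduces the keyless-IP form mentioned above, where the service falls back to the caller's own IP address.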
Building the Context API in Next.js

Create the file context/proxy-context and insert the following code:

```javascript
import React, { useState, useEffect, useRef, createContext } from 'react'

export const ProxyContext = createContext()

export const ProxyContextProvider = (props) => {
  const initialState = {
    ipAddress: '',
    countryName: 'Nowhere',
    isProxy: false,
    proxyType: ''
  }

  // Declare shareable proxy state
  const [proxy, setProxy] = useState(initialState)
  const prev = useRef()

  // Read and write proxy state to local storage
  useEffect(() => {
    if (proxy.countryName === 'Nowhere') {
      const localState = JSON.parse(localStorage.getItem('ip2proxy'))
      if (localState) {
        console.info('reading local storage')
        prev.current = localState.ipAddress
        setProxy(localState)
      }
    } else if (prev.current !== proxy.ipAddress) {
      console.info('writing local storage')
      localStorage.setItem('ip2proxy', JSON.stringify(proxy))
    }
  }, [proxy])

  return (
    <ProxyContext.Provider value={[proxy, setProxy]}>
      {props.children}
    </ProxyContext.Provider>
  )
}
```

Basically, we’re declaring a sharable state called proxy that will store data retrieved from the IP2Proxy web service. The API fetch query will be implemented in pages/index.js. The information will be used to redirect visitors to the relevant pages. If the visitor tries to refresh the page, the saved state will be lost. To prevent this from happening, we're going to use the useEffect() hook to persist state in the browser's local storage. When a user refreshes a particular landing page, the proxy state will be retrieved from local storage, so there's no need to perform the query again. Here's a quick sneak peek of Chrome's local storage in action:

Tip: in case you run into problems further down this tutorial, clearing local storage can help resolve some issues.
Displaying Proxy Information

Create the file components/proxy-view.js and add the following code:

```javascript
import React, { useContext } from 'react'
import { ProxyContext } from '../context/proxy-context'

const style = {
  padding: 12
}

const ProxyView = () => {
  const [proxy] = useContext(ProxyContext)
  const { ipAddress, countryName, isProxy, proxyType } = proxy

  return (
    <div className="box center" style={style}>
      <div className="content">
        <ul>
          <li>IP Address : {ipAddress} </li>
          <li>Country : {countryName} </li>
          <li>Proxy : {isProxy} </li>
          <li>Proxy Type: {proxyType} </li>
        </ul>
      </div>
    </div>
  )
}

export default ProxyView
```

This is simply a display component that we'll place at the end of each page. We're only creating it to confirm that our fetch logic and application state are working as expected. You should note that the line const [proxy] = useContext(ProxyContext) won't run until we've declared our Context Provider at the root of our application. Let's do that now.

Implementing the Context API Provider in the Next.js App

Create the file pages/_app.js and add the following code:

```javascript
import React from 'react'
import App from 'next/app'
import 'bulma/css/bulma.css'
import { ProxyContextProvider } from '../context/proxy-context'

export default class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props
    return (
      <ProxyContextProvider>
        <Component {...pageProps} />
      </ProxyContextProvider>
    )
  }
}
```

The _app.js file is the root component of our Next.js application, where we can share global state with the rest of the site pages and child components. Note that this is also where we import the CSS for the Bulma framework we installed earlier. With that set up, let's now build a layout that we'll use for all our site pages.

10 Zsh Tips & Tricks: Configuration, Customization & Usage

As web developers, the command line is becoming an ever more important part of our workflow. We use it to install packages from npm, to test API endpoints, to push commits to GitHub, and lots more besides.

My shell of choice is zsh. It's a highly customizable Unix shell that packs some very powerful features, such as killer tab completion, clever history, remote file expansion, and much more. In this article I'll show you how to install zsh, then offer ten tips and tricks to make you more productive when working with it. This is a beginner-level guide which can be followed by anybody (even Windows users, thanks to Windows Subsystem for Linux). However, in light of Apple's announcement that zsh is now the standard shell on macOS Catalina, Mac users might find it especially helpful. Let's get started.

Installation

I don't want to offer in-depth installation instructions for each operating system, rather some general guidelines instead. If you get stuck installing zsh, there's plenty of help available online. At the time of writing, the current zsh version is 5.7.1.

macOS

Most versions of macOS ship with zsh pre-installed. You can check if this is the case, and if so which version you're running, using the command zsh --version. If the version is 4.3.9 or higher, you should be good to go (we'll need at least this version to install Oh My Zsh later on). If not, you can follow this guide to install a more recent version of zsh using Homebrew. Once installed, you can set zsh as the default shell using chsh -s $(which zsh). After issuing this command, you'll need to log out, then log back in again for the changes to take effect. If at any point you decide you don't like zsh, you can revert to Bash using chsh -s $(which bash).

Linux

On Ubuntu-based distros, you can install zsh using sudo apt-get install zsh. Once the installation completes, you can check the version using zsh --version, then make zsh your default shell using chsh -s $(which zsh).
You'll need to log out, then log back in for the changes to take effect. As with macOS, you can revert to Bash using chsh -s $(which bash). If you're running a non-Ubuntu-based distro, then check out the instructions for other distros.

Windows

Unfortunately, this is where things start to get a little complicated. Zsh is a Unix shell, and for it to work on Windows you'll need to activate Windows Subsystem for Linux (WSL), an environment in Windows 10 for running Linux binaries. There are various tutorials online explaining how to get up and running with zsh in Windows 10. I found these two to be up to date and easy to follow:

- How to Install and Use the Linux Bash Shell on Windows 10: follow this one first to install WSL
- How to Use Zsh (or Another Shell) in Windows 10: follow this one second to install zsh

Note that it's also possible to get zsh running with Cygwin. Here are instructions for doing that.

First Run

When you first open zsh, you'll be greeted by the following menu.

Building a Habit Tracker with Prisma, Chakra UI, and React

In June 2019, Prisma 2 Preview was released. Prisma 1 changed the way we interact with databases: we could access databases through plain JavaScript methods and objects, without having to write queries in the database language itself. Prisma 1 acted as an abstraction in front of the database, making it easier to build CRUD (create, read, update and delete) applications.

Notice that Prisma 1's architecture required an additional Prisma server for the back end to access the database. The latest version doesn't require an additional server. It's called the Prisma Framework (formerly known as Prisma 2), and it's a complete rewrite of Prisma. The original Prisma was written in Scala, so it had to run on the JVM and needed an additional server. It also had memory issues. The Prisma Framework is written in Rust, so its memory footprint is low. Also, the additional server required by Prisma 1 is now bundled with the back end, so you can use it just like a library.

The Prisma Framework consists of three standalone tools:

- Photon: a type-safe and auto-generated database client ("ORM replacement")
- Lift: a declarative migration system with custom workflows
- Studio: a database IDE that provides an admin UI to support various database workflows

Photon is a type-safe database client that replaces traditional ORMs, and Lift allows us to create data models declaratively and perform database migrations. Studio allows us to perform database operations through a beautiful admin UI.

Why Use Prisma?

Prisma removes the complexity of writing complex database queries and simplifies database access in the application. By using Prisma, you can change the underlying database without having to change each and every query. It just works. Currently, it only supports MySQL, SQLite and PostgreSQL. Prisma provides type-safe database access via an auto-generated Prisma client.
It has a simple and powerful API for working with relational data and transactions. It allows visual data management with Prisma Studio. Providing end-to-end type safety means developers can have confidence in their code, thanks to static analysis and compile-time error checks. The developer experience improves drastically when data types are clearly defined. Type definitions are the foundation for IDE features like intelligent auto-completion or jump-to-definition.

Prisma unifies access to multiple databases at once (coming soon) and therefore drastically reduces complexity in cross-database workflows (coming soon). It provides automatic database migrations (optional) through Lift, based on a declarative data model expressed using GraphQL's schema definition language (SDL).

Prerequisites

For this tutorial, you need a basic knowledge of React. You also need to understand React Hooks. Since this tutorial is primarily focused on Prisma, it's assumed that you already have a working knowledge of React and its basic concepts. If you don't, don't worry: there are tons of tutorials available that will prepare you for following this post.

Throughout the course of this tutorial, we'll be using yarn. If you don't have yarn already installed, install it from here. To make sure we're on the same page, these are the versions used in this tutorial:

- Node v12.11.1
- npm v6.11.3
- npx v6.11.3
- yarn v1.19.1
- prisma2 v2.0.0-preview016.2
- react v16.11.0

Folder Structure

Our folder structure will be as follows:

```
streaks-app/
  client/
  server/
```

The client/ folder will be bootstrapped from create-react-app, while the server/ folder will be bootstrapped from the prisma2 CLI. So you just need to create a root folder called streaks-app/, and the subfolders will be generated while scaffolding it with the respective CLIs.
Go ahead and create the streaks-app/ folder and cd into it as follows:

```
$ mkdir streaks-app && cd $_
```

The Back End (Server Side)

Bootstrap a New Prisma 2 Project

You can bootstrap a new Prisma 2 project by using the npx command as follows:

```
$ npx prisma2 init server
```

Alternatively, you can install the prisma2 CLI globally and run the init command. To do so, run the following:

```
$ yarn global add prisma2 # or npm install --global prisma2
$ prisma2 init server
```

Run the Interactive prisma2 init Flow and Select a Boilerplate

Select the following in the interactive prompts:

- Select Starter Kit
- Select JavaScript
- Select GraphQL API
- Select SQLite

Once terminated, the init command will have created an initial project setup in the server/ folder. Now open the schema.prisma file and replace it with the following:

```
generator photon {
  provider = "photonjs"
}

datasource db {
  provider = "sqlite"
  url      = "file:dev.db"
}

model Habit {
  id     String @default(cuid()) @id
  name   String @unique
  streak Int
}
```

schema.prisma contains the data model as well as the configuration options. Here, we specify that we want to connect to the SQLite datasource called dev.db, as well as target code generators like the photonjs generator. Then we define the data model Habit, which consists of id, name and streak. id is a primary key of type String with a default value of cuid(). name is of type String, but with a constraint that it must be unique. streak is of type Int.
The seed.js file should look like this:

```javascript
const { Photon } = require('@generated/photon')
const photon = new Photon()

async function main() {
  const workout = await photon.habits.create({
    data: {
      name: 'Workout',
      streak: 49,
    },
  })
  const running = await photon.habits.create({
    data: {
      name: 'Running',
      streak: 245,
    },
  })
  const cycling = await photon.habits.create({
    data: {
      name: 'Cycling',
      streak: 77,
    },
  })
  const meditation = await photon.habits.create({
    data: {
      name: 'Meditation',
      streak: 60,
    },
  })
  console.log({
    workout,
    running,
    cycling,
    meditation,
  })
}

main()
  .catch(e => console.error(e))
  .finally(async () => {
    await photon.disconnect()
  })
```

This file creates several habits and adds them to the SQLite database. Now go inside the src/index.js file and remove its contents. We'll start adding content from scratch.

First, go ahead and import the necessary packages and declare some constants:

```javascript
const { GraphQLServer } = require('graphql-yoga')
const {
  makeSchema,
  objectType,
  queryType,
  mutationType,
  idArg,
  stringArg,
} = require('nexus')
const { Photon } = require('@generated/photon')
const { nexusPrismaPlugin } = require('nexus-prisma')
```

Now let's declare our Habit model just below it:

```javascript
const Habit = objectType({
  name: 'Habit',
  definition(t) {
    t.model.id()
    t.model.name()
    t.model.streak()
  },
})
```

We make use of objectType from the nexus package to declare Habit. The name parameter should be the same as defined in the schema.prisma file. The definition function lets you expose a particular set of fields wherever Habit is referenced. Here, we expose the id, name and streak fields. If we exposed only the id and name fields, only those two would be exposed wherever Habit is referenced.
Below that, paste the Query constant:

```javascript
const Query = queryType({
  definition(t) {
    t.crud.habit()
    t.crud.habits()
    // t.list.field('habits', {
    //   type: 'Habit',
    //   resolve: (_, _args, ctx) => {
    //     return ctx.photon.habits.findMany()
    //   },
    // })
  },
})
```

We make use of queryType from the nexus package to declare Query. The Photon generator generates an API that exposes CRUD functions on the Habit model. This is what allows us to expose the t.crud.habit() and t.crud.habits() methods. t.crud.habit() allows us to query any individual habit by its id or by its name. t.crud.habits() simply returns all the habits.

Alternatively, t.crud.habits() can also be written as:

```javascript
t.list.field('habits', {
  type: 'Habit',
  resolve: (_, _args, ctx) => {
    return ctx.photon.habits.findMany()
  },
})
```

Both the above code and t.crud.habits() will give the same results. In the above code, we make a field named habits. The return type is Habit. We then call ctx.photon.habits.findMany() to get all the habits from our SQLite database. Note that the name of the habits property is auto-generated using the pluralize package. It's therefore recommended practice to name our models singular (that is, Habit, not Habits). We use the findMany method on habits, which returns a list of objects. We find all the habits, as we've specified no condition inside findMany. You can learn more about how to add conditions inside findMany here.

Below Query, paste Mutation as follows:

```javascript
const Mutation = mutationType({
  definition(t) {
    t.crud.createOneHabit({ alias: 'createHabit' })
    t.crud.deleteOneHabit({ alias: 'deleteHabit' })
    t.field('incrementStreak', {
      type: 'Habit',
      args: {
        name: stringArg(),
      },
      resolve: async (_, { name }, ctx) => {
        const habit = await ctx.photon.habits.findOne({
          where: {
            name,
          },
        })
        return ctx.photon.habits.update({
          data: {
            streak: habit.streak + 1,
          },
          where: {
            name,
          },
        })
      },
    })
  },
})
```

Mutation uses mutationType from the nexus package. The CRUD API here exposes createOneHabit and deleteOneHabit.
createOneHabit, as the name suggests, creates a habit, whereas deleteOneHabit deletes a habit. createOneHabit is aliased as createHabit, so when calling the mutation we call createHabit rather than createOneHabit. Similarly, we call deleteHabit instead of deleteOneHabit.

Finally, we create a field named incrementStreak, which increments the streak of a habit. The return type is Habit. It takes an argument name, as specified in the args field, of type String. This argument is received in the resolve function as the second argument. We find the habit by calling ctx.photon.habits.findOne(), passing the name parameter in the where clause. We need this to get the current streak. Then we update the habit by incrementing the streak by 1.

Below Mutation, paste the following:

```javascript
const photon = new Photon()

new GraphQLServer({
  schema: makeSchema({
    types: [Query, Mutation, Habit],
    plugins: [nexusPrismaPlugin()],
  }),
  context: { photon },
}).start(() =>
  console.log(
    `🚀 Server ready at: http://localhost:4000\n⭐️ See sample queries: http://pris.ly/e/js/graphql#5-using-the-graphql-api`,
  ),
)

module.exports = { Habit }
```

We use the makeSchema method from the nexus package to combine our model Habit, and add Query and Mutation to the types array. We also add nexusPrismaPlugin to our plugins array. Finally, we start our server at localhost:4000. Port 4000 is the default port for graphql-yoga. You can change the port as suggested here.

Let's start the server now. But first, we need to make sure our latest schema changes are written to the node_modules/@generated/photon directory. This happens when you run prisma2 generate. If you haven't installed prisma2 globally, you'll have to replace prisma2 generate with ./node_modules/.bin/prisma2 generate. Then we need to migrate our database to create the tables.
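To see what the incrementStreak resolver boils down to, here's a minimal in-memory sketch of its find-then-update logic, with a plain array standing in for Photon and the SQLite database (the data and the helper are illustrative only, not part of Photon's API):

```javascript
// Seed data mirroring the habits created in seed.js
const habits = [
  { id: '1', name: 'Workout', streak: 49 },
  { id: '2', name: 'Running', streak: 245 },
]

// Mirrors findOne({ where: { name } }) followed by
// update({ data: { streak: habit.streak + 1 }, where: { name } })
function incrementStreak(name) {
  const habit = habits.find(h => h.name === name)
  if (!habit) return null
  habit.streak += 1
  return habit
}

console.log(incrementStreak('Workout').streak) // 50
```

The real resolver does exactly this, except the lookup and update run against the database through Photon, and unknown names would surface as an error rather than a null.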

Black Friday 2019 for Designers and Developers

This article was created in partnership with Mekanism. Thank you for supporting the partners who make SitePoint possible.

Black Friday is one of the best opportunities of the year to get all kinds of new stuff, including digital web tools and services. Some companies offer huge discounts to heavily increase their sales, while others already have excellent offers for their customers and partners. In this article, you’ll find free and premium web tools and services, as well as some of the best Black Friday WordPress deals. We've included website builders, UI kits, admin themes, WordPress themes, effective logo and brand identity creators, and much more. There’s a web tool or service for everyone in this showcase of 38 excellent solutions. Let’s start.

1. Free and Premium Bootstrap 4 Admin Themes and UI Kits

DashboardPack is one of the main suppliers of free and premium Bootstrap 4 admin themes and UI kits, used by tens of thousands of people with great success. Here you’ll find free and premium themes made with great attention to detail: HTML5 themes, React themes, Angular themes, and Vue themes. On the DashboardPack website there’s a dedicated Freebies section with four gorgeous dashboard themes (HTML, Angular, Vue, and React) that you can see as a live demo and use for free. Between November 29 and December 3, you get a 50% discount on all templates and all license types (Personal, Developer, and Lifetime). Use this coupon code: MADBF50.

2. Total Theme

Total Theme is a super powerful and complete WordPress theme that's flexible and easy to use and customize. It has brilliant designs included, and other cool stuff. With over 38k happy users, Total Theme is a popular WordPress theme. It comes loaded with over 80 builder modules, over 40 premade demos that can be installed with one click, 500 styling options, and a friendly, lightning-fast interface.
The premade demos cover niches like business, one-page, portfolio, personal, creative, shop, blog, photography, and more. Total Theme will help you achieve pretty much any goal, either from scratch using the included Visual Page Builder or by editing a demo design. A limited-time 50% off Total Theme offer is valid from November 26 2019 (12pm AEDT) through December 3 2019 (8pm AEDT). The discount is already applied.

3. Tailor Brands

Imagine if your dream business idea had a name, a face, and branded documents that made it official. With Tailor Brands’ online logo maker and design tools, you can instantly turn that dream idea into a living, breathing company! Design a logo in 30 seconds, customize it to your liking, and put it on everything, from professional business cards to online presentations. Tailor Brands’ mission is to be the biggest branding agency powered by AI. It’s a huge goal, but it's achievable, and they already hold a top position on this ladder. Designing a logo with Tailor Brands is super simple, and you don’t need any special skills or previous experience to get a top logo design. You write the logo name you like, add a tagline (optional), indicate which industry your logo is for, choose whether you want an icon-, name- or initial-based logo, pick between the example designs you're shown, and the powerful AI will present you with plenty of logo designs to choose from. It’s super simple and straightforward. Go ahead and design a logo with Tailor Brands.

4. Freelance Taxes

Bonsai is the integrated suite of products used by the world’s best creative freelancers. With the latest addition of freelance taxes to the product lineup, Bonsai is more prepared than ever to help with everything your freelance business needs. Be prepared for tax season and spend just seconds getting an overview of what you owe in annual or quarterly taxes.
Bonsai’s freelance tax software looks at your expenses, automatically categorizes them, and highlights which are deductible and to what percentage. All Bonsai products are deeply integrated with each other to ensure they can fit every work style. Other features you should know about include contracts, proposals, time-tracking, and invoicing. Start your free trial of Bonsai today and be ready for your freelance taxes ahead of time! 5. Codester Codester is a huge marketplace where web designers and developers can find thousands of premium scripts, codes, app templates, themes (of all kinds), plugins, graphics, and much more. Always check the Flash Sale section where hugely discounted items are being sold. 6. Mobile App Testing With over eight years of experience, this App and Browser Testing service is powerful, easy to use and provides you with a large number of features tailored to help you improve your product. Use TestingBot for automated web and app testing, for live web and app testing, for visual testing, and much more. Start a free, 14-day trial, no credit card required. 7. FunctionFox The leading choice for creative professionals, FunctionFox gives you simple yet powerful time-tracking and project-management tools that allow you to keep multiple projects on track, forecast workloads, reduce communication breakdowns and stay on top of deadlines through project scheduling, task-based assignments, internal communication tools and comprehensive reporting. Don't let deadlines and due dates slip past! Try a free demo today at FunctionFox. 8. Taskade: Simple Tasks, Notes, Chat Taskade is a unified workspace where you can chat, write, and get work done with your team. Edit projects in real time. Chat and video conference on the same page. Keep track of tasks across multiple teams and workspaces. Plan, manage, and visualize projects. And much more. With Taskade, you can build your own workspace templates. 
You can start from a blank page or you can choose between a Weekly Planner, Meeting Agenda, Project Board, Mindmap, and more (you'll find lots of templates to start with). Everything you need can be fully configured to be a perfect fit. 9. Live Chat Software AppyPie is a professional and super-easy-to-use Live Chat solution that will help you reach out to your clients and offer them real-time responses and support through your website and mobile app, using the platform's live chat software. This is a brilliant way to quickly increase conversions, make more sales (you can answer questions from people who want to buy), and increase the level of happiness of your customers. (Whatever problem they may have, they know that you're there to help fast.) Request an invite to test the platform. 10. Mobirise Website Builder Mobirise is arguably the best website builder in 2019, which you can use to create fast, responsive, and Google-friendly websites in minutes, with zero coding, and only drag-and-drop. This brilliant builder is loaded with over 2,000 awesome website templates to start with, with eCommerce and Shopping Cart, sliders, galleries, forms, popups, icons, and much more. At the moment there's a 94% discount, so take advantage of it. 11. Newsletter Templates MailMunch is a powerful drag-and-drop builder that's loaded with tons of beautiful, pre-designed newsletter templates, with advanced features like Template Blocks and a Media Library to make the workflow even smoother, and a lot more. There's no coding required to use MailMunch. Start boosting your conversions with MailMunch. 12. Astra Theme: Elementor Templates Elementor is the most powerful website builder on the market, being used by millions of people with great success. To stand out from the crowd, you can supercharge Elementor with 100+ free and premium templates, by using this bundle. Free to use. 13. Schema Pro Creating schema markup is no longer a chore! 
With a simple click-and-select interface you can set up a markup in minutes. All the markup configurations you set are automatically applied to all selected pages and posts. Get Schema Pro and outperform your competitors in search engines. 14. Rank Math SEO Rank Math is the most powerful and easy-to-use WordPress SEO plugin on the market, making your website rank higher in search engines in no time. After a quick installation and setup, Rank Math SEO does the whole job with no supervision. The post Black Friday 2019 for Designers and Developers appeared first on SitePoint.

Delay, Sleep, Pause, & Wait in JavaScript

Many programming languages have a sleep function that will delay a program's execution for a given number of seconds. This functionality is absent from JavaScript, however, owing to its asynchronous nature. In this article, we'll look briefly at why this might be, then how we can implement a sleep function ourselves. Understanding JavaScript's Execution Model Before we get going, it's important to make sure we understand JavaScript's execution model correctly. Consider the following Ruby code: require 'net/http' require 'json' url = 'https://api.github.com/users/jameshibbard' uri = URI(url) response = JSON.parse(Net::HTTP.get(uri)) puts response['public_repos'] puts "Hello!" As one might expect, this code makes a request to the GitHub API to fetch my user data. It then parses the response, outputs the number of public repos attributed to my GitHub account and finally prints "Hello!" to the screen. Execution goes from top to bottom. Contrast that with the equivalent JavaScript version: fetch('https://api.github.com/users/jameshibbard') .then(res => res.json()) .then(json => console.log(json.public_repos)); console.log("Hello!"); If you run this code, it will output "Hello!" to the screen, then the number of public repos attributed to my GitHub account. This is because fetching data from an API is an asynchronous operation in JavaScript. The JavaScript interpreter will encounter the fetch command and dispatch the request. It will not, however, wait for the request to complete. Rather, it will continue on its way, output "Hello!" to the console, then when the request returns a couple of hundred milliseconds later, it will output the number of repos. If any of this is news to you, you should watch this excellent conference talk: What the heck is the event loop anyway?. You Might Not Actually Need a Sleep Function Now that we have a better understanding of JavaScript's execution model, let's have a look at how JavaScript handles delays and asynchronous operations. 
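To make the ordering described above concrete, here's a minimal sketch (my own, not from the original article) showing that even a zero-millisecond timeout only runs after the current synchronous code has finished:

```javascript
// Minimal sketch: a zero-delay timeout still waits for the call stack to empty.
const order = [];

order.push('start');

setTimeout(() => {
  order.push('timeout'); // queued as a task; runs after all synchronous code
}, 0);

order.push('end');

// At this point the synchronous code has run, but the timeout callback hasn't:
console.log(order); // ['start', 'end']
```

The callback is placed on the task queue and only executed once the call stack is empty — exactly why the fetch callbacks above ran after "Hello!" was logged.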
Create a Simple Delay Using setTimeout The standard way of creating a delay in JavaScript is to use its setTimeout method. For example: console.log("Hello"); setTimeout(() => { console.log("World!"); }, 2000); This would log "Hello" to the console, then after two seconds "World!" And in many cases, this is enough: do something, wait, then do something else. Sorted! However, please be aware that setTimeout is an asynchronous method. Try altering the previous code like so: console.log("Hello"); setTimeout(() => { console.log("World!"); }, 2000); console.log("Goodbye!"); It will log: Hello Goodbye! World! Waiting for Things with setTimeout It's also possible to use setTimeout (or its cousin setInterval) to keep JavaScript waiting until a condition is met. For example, here's how you might use setTimeout to wait for a certain element to appear on a web page: function pollDOM () { const el = document.querySelector('my-element'); if (el) { // Do something with el } else { setTimeout(pollDOM, 300); // try again in 300 milliseconds } } pollDOM(); Note that querySelector returns a single element (or null), so we check its truthiness directly. This assumes the element will turn up at some point. If you're not sure that's the case, you'll need to look at canceling the timer (using clearTimeout or clearInterval). If you'd like to find out more about JavaScript's setTimeout method, please consult our tutorial which has plenty of examples to get you going. The post Delay, Sleep, Pause, & Wait in JavaScript appeared first on SitePoint.
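The excerpt stops short of an actual sleep implementation, so here's the common promise-based pattern as a hedged sketch (the helper name sleep is my own, not from the article): wrapping setTimeout in a Promise lets an async function pause without blocking anything else.

```javascript
// A promise-based sleep: resolves after roughly `ms` milliseconds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const run = async () => {
  console.log('Hello');
  await sleep(100); // only run() is paused here, not the whole program
  console.log('World!');
};

run();
console.log('Goodbye!'); // logs immediately, before 'World!'
```

Because await only suspends the surrounding async function, "Goodbye!" still prints first — the same ordering behavior as the plain setTimeout example above.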

Understanding module.exports and exports in Node.js

In programming, modules are self-contained units of functionality that can be shared and reused across projects. They make our lives as developers easier, as we can use them to augment our applications with functionality that we haven't had to write ourselves. They also allow us to organize and decouple our code, leading to applications that are easier to understand, debug and maintain. In this article, I'll examine how to work with modules in Node.js, focusing on how to export and consume them. Different Module Formats As JavaScript originally had no concept of modules, a variety of competing formats have emerged over time. Here's a list of the main ones to be aware of: The Asynchronous Module Definition (AMD) format is used in browsers and uses a define function to define modules. The CommonJS (CJS) format is used in Node.js and uses require and module.exports to define dependencies and modules. The npm ecosystem is built upon this format. The ES Module (ESM) format. As of ES6 (ES2015), JavaScript supports a native module format. It uses an export keyword to export a module's public API and an import keyword to import it. The System.register format was designed to support ES6 modules within ES5. The Universal Module Definition (UMD) format can be used both in the browser and in Node.js. It's useful when a module needs to be imported by a number of different module loaders. Please be aware that this article deals solely with the CommonJS format, the standard in Node.js. If you'd like to read into any of the other formats, I recommend this article, by SitePoint author Jurgen Van de Moere. Requiring a Module Node.js comes with a set of built-in modules that we can use in our code without having to install them. To do this, we need to require the module using the require keyword and assign the result to a variable. This can then be used to invoke any methods the module exposes. 
For example, to list out the contents of a directory, you can use the file system module and its readdir method: const fs = require('fs'); const folderPath = '/home/jim/Desktop/'; fs.readdir(folderPath, (err, files) => { files.forEach(file => { console.log(file); }); }); Note that in CommonJS, modules are loaded synchronously and processed in the order they occur. Creating and Exporting a Module Now let's look at how to create our own module and export it for use elsewhere in our program. Start off by creating a user.js file and adding the following: const getName = () => { return 'Jim'; }; exports.getName = getName; Now create an index.js file in the same folder and add this: const user = require('./user'); console.log(`User: ${user.getName()}`); Run the program using node index.js and you should see the following output to the terminal: User: Jim So what has gone on here? Well, if you look at the user.js file, you'll notice that we're defining a getName function, then using the exports keyword to make it available for import elsewhere. Then in the index.js file, we're importing this function and executing it. Also notice that in the require statement, the module name is prefixed with ./, as it's a local file. Also note that there's no need to add the file extension. Exporting Multiple Methods and Values We can export multiple methods and values in the same way: const getName = () => { return 'Jim'; }; const getLocation = () => { return 'Munich'; }; const dateOfBirth = '12.01.1982'; exports.getName = getName; exports.getLocation = getLocation; exports.dob = dateOfBirth; And in index.js: const user = require('./user'); console.log( `${user.getName()} lives in ${user.getLocation()} and was born on ${user.dob}.` ); The code above produces this: Jim lives in Munich and was born on 12.01.1982. Notice how the name we give the exported dateOfBirth variable can be anything we fancy (dob in this case). It doesn't have to be the same as the original variable name. 
Variations in Syntax I should also mention that it's possible to export methods and values as you go, not just at the end of the file. For example: exports.getName = () => { return 'Jim'; }; exports.getLocation = () => { return 'Munich'; }; exports.dob = '12.01.1982'; And thanks to destructuring assignment, we can cherry-pick what we want to import: const { getName, dob } = require('./user'); console.log( `${getName()} was born on ${dob}.` ); As you might expect, this logs: Jim was born on 12.01.1982. The post Understanding module.exports and exports in Node.js appeared first on SitePoint.

Quick Tip: How to Sort an Array of Objects in JavaScript

If you have an array of objects that you need to sort into a certain order, you might be tempted to reach for a JavaScript library. But before you do, remember that you can do some pretty neat sorting with the native Array.sort function. In this article, we'll show you how to sort an array of objects in JavaScript with no fuss or bother. To follow along, you'll need a knowledge of basic JavaScript concepts, such as declaring variables, writing functions, and conditional statements. We'll also be using ES6 syntax. You can get a refresher on that via our extensive collection of ES6 guides. This popular article was updated in November 2019. Basic Array Sorting By default, the JavaScript Array.sort function converts each element in the array that needs to be sorted into a string, and compares them in Unicode code point order. const foo = [9, 1, 4, 'zebroid', 'afterdeck']; foo.sort(); // returns [ 1, 4, 9, 'afterdeck', 'zebroid' ] const bar = [5, 18, 32, new Set, { user: 'Eleanor Roosevelt' }]; bar.sort(); // returns [ 18, 32, 5, { user: 'Eleanor Roosevelt' }, Set {} ] You may be wondering why 32 comes before 5. Not logical, huh? Well, actually it is. This happens because each element in the array is first converted to a string, and "32" comes before "5" in Unicode order. It’s also worth noting that unlike many other JavaScript array functions, Array.sort actually changes, or mutates, the array it sorts. const baz = ['My cat ate my homework', 37, 9, 5, 17]; baz.sort(); // baz array is modified console.log(baz); // shows [ 17, 37, 5, 9, 'My cat ate my homework' ] To avoid this, you can create a new instance of the array to be sorted and modify that instead. This is possible using an array method that returns a copy of the array. 
For example, Array.slice: const sortedBaz = baz.slice().sort(); // a new instance of the baz array is created and sorted Or if you prefer a newer syntax, you can use the spread operator for the same effect: const sortedBaz = [...baz].sort(); // a new instance of the baz array is created and sorted The output is the same in both cases: console.log(baz); // ['My cat ate my homework', 37, 9, 5, 17]; console.log(sortedBaz); // [ 17, 37, 5, 9, 'My cat ate my homework' ] The post Quick Tip: How to Sort an Array of Objects in JavaScript appeared first on SitePoint.
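The excerpt ends before reaching the article's headline topic, so here's a brief sketch of where it's heading (the singers data is invented for illustration): supplying a compare function to Array.sort overrides the default string comparison.

```javascript
// Numbers: compare numerically rather than as Unicode strings
const nums = [32, 5, 18];
nums.sort((a, b) => a - b);
console.log(nums); // [5, 18, 32]

// Objects: sort a *copy* of the array by one of the objects' properties
const singers = [
  { name: 'Steven Tyler', born: 1948 },
  { name: 'Kurt Cobain', born: 1967 },
  { name: 'Karen Carpenter', born: 1950 },
];

const byBirthYear = [...singers].sort((a, b) => a.born - b.born);
console.log(byBirthYear.map(s => s.name));
// ['Steven Tyler', 'Karen Carpenter', 'Kurt Cobain']
```

Sorting the spread copy leaves the original singers array untouched, combining the comparator technique with the non-mutating pattern shown above.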

Remote Work: Tips, Tricks and Best Practices for Success

There are lots of advantages to working away from the office, both for developers and for the companies that employ them. Think about avoiding the daily commute, the cost of office space, the cost of living in or traveling to the city for rural or international workers, the inconvenience of office work for differently abled people or those with unusual family or life responsibilities, and the inflexibility of trying to keep traditional 9–5 hours as more and more of our workforce adapts to the gig economy by taking on second jobs or part-time side hustles. Remote work can help address many of these difficulties while improving team transparency and putting the focus of work back on the reasons you were hired for your job in the first place. It also opens up a world of possibilities for companies, including broader recruitment opportunities, improved worker transparency, lower infrastructure costs, and more scalable business models based on actual worker productivity. But working from home or from a co-working space can also present new challenges, and learning how to recognize them and overcome them can make the difference between a productive, happy work experience and endless hours of misery, loneliness, and frustration. Think I’m being overdramatic? Let me explain. I’ve had the experience of being the remote worker who didn’t think he needed to pay attention to interpersonal office dynamics, or keep track of his time and accomplishments. I’ve worked long into the evening because I didn’t notice when the work day ended. I’ve struggled with inefficient tools that might have worked fine in an office environment, but proved woefully inadequate when it came to remote collaboration. So I’ve learned to cope with these issues myself, and for years I’ve been coaching engineering teams by working on-site, remotely, and in various combinations of the two. 
Depending on your situation, there are a number of useful tools, tricks, and fundamental practices that can make your remote working experience so much better than it is today — for yourself, your team, your manager, and your company. Remote Self-management For better or for worse, most of us are used to having a manager decide what our working hours are, where we’re going to sit, what equipment we’re going to use, and whom we’re going to collaborate with. That’s a luxury that comes with the convenience of working together in a shared space, where management can supervise and coordinate our efforts. It may not always feel luxurious, but you may well find yourself missing the support of an attentive manager when you start working from home and realize you have to make these decisions for yourself. Set a Schedule and Stick to It! The first tip I offer for anyone starting out in a remote role is to establish the hours you’re going to work, and stick to those hours. It’s not as easy as it sounds. When you’re working from home, you won’t have all of the little cues that come with office life to tell you when to pause for lunch, when to take a break, and when to stop working for the day. Working from a co-working space or a coffee shop can help, but it’s not the same as having your colleagues around you to exert that not-so-subtle social pressure. What’s more, if you start to feel anxious about whether people at the office know how hard you’re working, you may find yourself wanting to compensate by putting in a few extra hours. Some people find that it's easier to compartmentalize remote work by using a co-working space, simulating the effect of going out to work and then coming back at the end of the day. If you're working from home, your professional and personal lives can start to blend. You’re going to find yourself washing the dishes, feeding the cat, answering the telephone, and attending to all the other chores that crop up in your living space. And you know what? 
That’s just fine! … as long as it doesn't start to interfere with your productivity on the job. Decide up front on your morning and afternoon work hours and respect them. Write them down somewhere you won’t forget to see them, so you can’t pretend you don’t know what they are. The same advice applies to teams working together in an office or people using co-working spaces, but it’s even more critical if you're working from home. Let Everyone Know When and Where You'll Be Working Building on the theme of scheduling, a remote worker needs to let anyone who works with them know how to get in touch, and may need to encourage that kind of contact regularly. Remote workers can feel isolated or even excluded — left out of important decisions because people at the office simply forgot about them. It's up to the person who’s working off site to make their existence known throughout the work day, and to advocate for visibility. This can be easier said than done. One of the advantages of remote work is the ability to focus without interruption for extended periods. Sometimes just the knowledge that the bubble of isolation can be broken is enough to foster distraction and make it harder to concentrate. This can make the experience draining and unproductive, and negate most of the advantages. It's not a bad idea to start off just using email to stay in touch with the team for typical group communications. And as a personal productivity tip, try to establish set times during the day to check that email — perhaps three or so over the course of a day. Checking your email constantly can establish a pattern of behavior that puts your attention at the mercy of anyone who wants to reach out to you for anything at any time. Email is asynchronous by nature, so use that to your advantage when you're working from home. Apart from direct communication, it's good to get your team using a messaging tool such as Slack or HipChat. 
These services can run in the background on every team member's computer, or even on their mobile devices, providing a shared space for inter-team, intra-team, and cross-functional messaging. There are secure ways for companies to make services like these available for sensitive internal communications, and they can work both on site and off site, establishing virtual shared message boards to keep teams aligned. The post Remote Work: Tips, Tricks and Best Practices for Success appeared first on SitePoint.

Create a Toggle Switch in React as a Reusable Component

In this article, we're going to create an iOS-inspired toggle switch using React components. By the end, we'll have built a simple demo React App that uses our custom toggle switch component. We could use third-party libraries for this, but building from scratch allows us to better understand how our code is working and allows us to customize our component completely. Forms provide a major means for enabling user interactions. The checkbox is traditionally used for collecting binary data — such as yes or no, true or false, enable or disable, on or off, etc. Although some modern interface designs steer away from form fields when creating toggle switches, I'll stick with them here due to their greater accessibility. Here's a screenshot of the component we'll be building: Getting Started We can start with a basic HTML checkbox input form element with its necessary properties set: <input type="checkbox" name="name" id="id" /> To build around it, we might need an enclosing <div> with a class, a <label> and the <input /> control itself. Adding everything, we might get something like this: <div class="toggle-switch"> <input type="checkbox" class="toggle-switch-checkbox" name="toggleSwitch" id="toggleSwitch" /> <label class="toggle-switch-label" for="toggleSwitch"> Toggle Me! </label> </div> In time, we can get rid of the label text and use the <label> tag to check or uncheck the checkbox input control. Inside the <label>, let's add two <span>s that help us construct the switch holder and the toggling switch itself: <div class="toggle-switch"> <input type="checkbox" class="toggle-switch-checkbox" name="toggleSwitch" id="toggleSwitch" /> <label class="toggle-switch-label" for="toggleSwitch"> <span class="toggle-switch-inner"></span> <span class="toggle-switch-switch"></span> </label> </div> Converting to a React Component Now that we know what needs to go into the HTML, all we need to do is to convert the HTML into a React component. Let's start with a basic component here. 
We'll make this a class component, and then we'll convert it into hooks, as it's easier for new developers to follow state than useState: import React, { Component } from "react"; class ToggleSwitch extends Component { render() { return ( <div className="toggle-switch"> <input type="checkbox" className="toggle-switch-checkbox" name="toggleSwitch" id="toggleSwitch" /> <label className="toggle-switch-label" htmlFor="toggleSwitch"> <span className="toggle-switch-inner" /> <span className="toggle-switch-switch" /> </label> </div> ); } } export default ToggleSwitch; At this point, it's not possible to have multiple toggle switch sliders on the same view or same page due to the repetition of ids. We could leverage React's way of componentization here, but in this instance, we'll be using props to dynamically populate the values: import React, { Component } from "react"; class ToggleSwitch extends Component { render() { return ( <div className="toggle-switch"> <input type="checkbox" className="toggle-switch-checkbox" name={this.props.Name} id={this.props.Name} /> <label className="toggle-switch-label" htmlFor={this.props.Name}> <span className="toggle-switch-inner" /> <span className="toggle-switch-switch" /> </label> </div> ); } } export default ToggleSwitch; The this.props.Name will populate the values of id, name and for (note that it is htmlFor in React JS) dynamically, so that you can pass different values to the component and have multiple of them on the same page. Also, the <span> tag doesn't have an ending </span> tag; instead it's closed in the starting tag like <span />, and this is completely fine. The post Create a Toggle Switch in React as a Reusable Component appeared first on SitePoint.
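To see why parameterizing the id matters, here's a framework-free sketch (my own, not from the article) that stamps out the same markup for different name values — the React component does the same job with this.props.Name:

```javascript
// Each switch gets a unique id/for pair derived from `name`, so several
// instances can coexist on one page without their labels colliding.
const toggleSwitchHTML = (name) => `
<div class="toggle-switch">
  <input type="checkbox" class="toggle-switch-checkbox" name="${name}" id="${name}" />
  <label class="toggle-switch-label" for="${name}">
    <span class="toggle-switch-inner"></span>
    <span class="toggle-switch-switch"></span>
  </label>
</div>`;

const a = toggleSwitchHTML('wifi');
const b = toggleSwitchHTML('bluetooth');
console.log(a.includes('id="wifi"'));       // true
console.log(b.includes('for="bluetooth"')); // true
```

If the id were hard-coded, clicking any label would always toggle the first checkbox with that id — which is exactly the duplication problem the props-based version solves.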

Compile-time Immutability in TypeScript

TypeScript allows us to decorate specification-compliant ECMAScript with type information that we can analyze and output as plain JavaScript using a dedicated compiler. In large-scale projects, this sort of static analysis can catch potential bugs ahead of resorting to lengthy debugging sessions, let alone deploying to production. However, reference types in TypeScript are still mutable, which can lead to unintended side effects in our software. In this article, we'll look at possible constructs where prohibiting references from being mutated can be beneficial. Primitives vs Reference Types JavaScript defines two overarching groups of data types: Primitives: low-level values that are immutable (e.g. strings, numbers, booleans etc.) References: collections of properties, representing identifiable heap memory, that are mutable (e.g. objects, arrays, Map etc.) Say we declare a constant, to which we assign a string: const message = 'hello'; Given that strings are primitives and are thus immutable, we’re unable to directly modify this value. It can only be used to produce new values: console.log(message.replace('h', 'sm')); // 'smello' console.log(message); // 'hello' Despite invoking replace() upon message, we aren't modifying its memory. We're merely creating a new string, leaving the original contents of message intact. Mutating the indices of message is a no-op by default, but will throw a TypeError in strict mode: 'use strict'; const message = 'hello'; message[0] = 'j'; // TypeError: 0 is read-only Note that if the declaration of message were to use the let keyword, we would be able to replace the value to which it resolves: let message = 'hello'; message = 'goodbye'; It's important to highlight that this is not mutation. Instead, we're replacing one immutable value with another. Mutable References Let's contrast the behavior of primitives with references. 
Let's declare an object with a couple of properties: const me = { name: 'James', age: 29, }; Given that JavaScript objects are mutable, we can change its existing properties and add new ones: me.name = 'Rob'; me.isTall = true; console.log(me); // Object { name: "Rob", age: 29, isTall: true }; Unlike primitives, objects can be directly mutated without being replaced by a new reference. We can prove this by sharing a single object across two declarations: const me = { name: 'James', age: 29, }; const rob = me; rob.name = 'Rob'; console.log(me); // { name: 'Rob', age: 29 } JavaScript arrays, which inherit from Object.prototype, are also mutable: const names = ['James', 'Sarah', 'Rob']; names[2] = 'Layla'; console.log(names); // Array(3) [ 'James', 'Sarah', 'Layla' ] What's the Issue with Mutable References? Consider we have a mutable array of the first five Fibonacci numbers: const fibonacci = [1, 2, 3, 5, 8]; log2(fibonacci); // replaces each item, n, with Math.log2(n); appendFibonacci(fibonacci, 5, 5); // appends the next five Fibonacci numbers to the input array This code may seem innocuous on the surface, but since log2 mutates the array it receives, our fibonacci array will no longer exclusively represent Fibonacci numbers as the name would otherwise suggest. Instead, fibonacci would become [0, 1, 1.584962500721156, 2.321928094887362, 3, 13, 21, 34, 55, 89]. One could therefore argue that the names of these declarations are semantically inaccurate, making the flow of the program harder to follow. 
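Alongside the compile-time checks the article's title promises, JavaScript also offers a runtime guard worth knowing about — Object.freeze — sketched here as an aside (it's shallow, and it isn't covered in the excerpt itself):

```javascript
// Object.freeze makes an object's own properties non-writable (shallowly).
const me = Object.freeze({ name: 'James', age: 29 });

let threw = false;
try {
  me.name = 'Rob'; // throws a TypeError in strict mode; silently ignored otherwise
} catch (e) {
  threw = true;
}

// Either way, the object is unchanged:
console.log(me.name); // 'James'
console.log(Object.isFrozen(me)); // true
```

Unlike the type-level restrictions discussed later, this check happens at runtime and only one level deep: a nested object inside a frozen one remains mutable unless it is frozen too.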
Pseudo-immutable Objects in JavaScript Although JavaScript objects are mutable, we can take advantage of particular constructs to clone references, namely spread syntax (note that spread copies only one level deep, which is why address is spread explicitly below): const me = { name: 'James', age: 29, address: { house: '123', street: 'Fake Street', town: 'Fakesville', country: 'United States', zip: 12345, }, }; const rob = { ...me, name: 'Rob', address: { ...me.address, house: '125', }, }; console.log(me.name); // 'James' console.log(rob.name); // 'Rob' console.log(me === rob); // false The spread syntax is also compatible with arrays: const names = ['James', 'Sarah', 'Rob']; const newNames = [...names.slice(0, 2), 'Layla']; console.log(names); // Array(3) [ 'James', 'Sarah', 'Rob' ] console.log(newNames); // Array(3) [ 'James', 'Sarah', 'Layla' ] console.log(names === newNames); // false Thinking immutably when dealing with reference types can make the behavior of our code clearer. Revisiting the prior mutable Fibonacci example, we could avoid such mutation by copying fibonacci into a new array: const fibonacci = [1, 2, 3, 5, 8]; const log2Fibonacci = [...fibonacci]; log2(log2Fibonacci); appendFibonacci(fibonacci, 5, 5); Rather than placing the burden of creating copies on the consumer, it would be preferable for log2 and appendFibonacci to treat their inputs as read-only, creating new outputs based upon them: const PHI = 1.618033988749895; const log2 = (arr: number[]) => arr.map(n => Math.log2(n)); const fib = (n: number) => (PHI ** n - (-PHI) ** -n) / Math.sqrt(5); const createFibSequence = (start = 0, length = 5) => new Array(length).fill(0).map((_, i) => fib(start + i + 2)); const fibonacci = [1, 2, 3, 5, 8]; const log2Fibonacci = log2(fibonacci); const extendedFibSequence = [...fibonacci, ...createFibSequence(5, 5)]; By writing our functions to return new references instead of mutating their inputs, the array identified by the fibonacci declaration remains unchanged, and its name remains a valid source of context. 
Ultimately, this code is more deterministic. The post Compile-time Immutability in TypeScript appeared first on SitePoint.

Getting Started with Puppeteer

Browser developer tools provide an amazing array of options for delving under the hood of websites and web apps. These capabilities can be further enhanced and automated by third-party tools. In this article, we'll look at Puppeteer, a Node-based library for use with Chrome/Chromium. The puppeteer website describes Puppeteer as a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium. Puppeteer is made by the team behind Google Chrome, so you can be pretty sure it will be well maintained. It lets us perform common actions on the Chromium browser, programmatically through JavaScript, via a simple and easy-to-use API. With Puppeteer, you can: scrape websites generate screenshots of websites including SVG and Canvas create PDFs of websites crawl an SPA (single-page application) access web pages and extract information using the standard DOM API generate pre-rendered content — that is, server-side rendering automate form submission automate performance analysis automate UI testing, as you would with Cypress test Chrome extensions Puppeteer does nothing that Selenium, PhantomJS (which is now deprecated), and the like can't do, but it provides a simple, easy-to-use API and a great abstraction, so we don't have to worry about the nitty-gritty details when dealing with it. It's also actively maintained, so we get all the new features of ECMAScript as Chromium supports them. Prerequisites For this tutorial, you need a basic knowledge of JavaScript, ES6+ and Node.js. You must also have installed the latest version of Node.js. We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here. 
To make sure we're on the same page, these are the versions used in this tutorial:

- Node 12.12.0
- yarn 1.19.1
- puppeteer 2.0.0

Installation

To use Puppeteer in your project, run the following command in the terminal:

$ yarn add puppeteer

Note: when you install Puppeteer, it downloads a recent version of Chromium (~170MB macOS, ~282MB Linux, ~280MB Win) that is guaranteed to work with the API. To skip the download, see Environment variables. If you don't need to download Chromium, then you can install puppeteer-core:

$ yarn add puppeteer-core

puppeteer-core is intended to be a lightweight version of Puppeteer for launching an existing browser installation or for connecting to a remote one. Be sure that the version of puppeteer-core you install is compatible with the browser you intend to connect to. Note: puppeteer-core is only published from version 1.7.0.

Usage

Puppeteer requires at least Node v6.4.0, but we're going to use async/await, which is only supported in Node v7.6.0 or greater, so make sure to update your Node.js to the latest version to get all the goodies. Let's dive into some practical examples using Puppeteer. In this tutorial, we'll be:

- generating a screenshot of Unsplash using Puppeteer
- creating a PDF of Hacker News using Puppeteer
- signing in to Facebook using Puppeteer

1. Generate a Screenshot of Unsplash using Puppeteer

It's really easy to do this with Puppeteer. Go ahead and create a screenshot.js file in the root of your project. Then paste in the following code:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash.png' })

  await browser.close()
}

main()

Firstly, we require the puppeteer package. Then we call the launch method on it, which initializes the instance. This method is asynchronous, as it returns a Promise, so we await it to get the browser instance.
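Every one of these calls returns a Promise, which is why each step is awaited in sequence. Here's a runnable sketch of that same flow using a stand-in for puppeteer.launch (fakeLaunch and its return shape are invented for illustration, not part of Puppeteer's API):

```javascript
// A stand-in that resolves to an object shaped loosely like Puppeteer's Browser.
const fakeLaunch = () =>
  Promise.resolve({
    newPage: () =>
      Promise.resolve({
        goto: url => Promise.resolve(`visited ${url}`),
      }),
    close: () => Promise.resolve(),
  });

const main = async () => {
  const browser = await fakeLaunch();   // wait for the "browser" to be ready
  const page = await browser.newPage(); // wait for the "page"
  const result = await page.goto('https://unsplash.com');
  await browser.close();                // always close the browser when done
  return result;
};

main().then(console.log); // 'visited https://unsplash.com'
```

If any await were omitted, the next line would receive a pending Promise rather than the browser or page object, which is the most common beginner mistake with Puppeteer scripts.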
Then we call newPage on it, go to Unsplash, take a screenshot of it, and save the screenshot as unsplash.png. Now go ahead and run the above code in the terminal by typing:

$ node screenshot

After 5–10 seconds you'll see an unsplash.png file in your project that contains the screenshot of Unsplash. Notice that the viewport is set to 800px x 600px, as Puppeteer sets this as the initial page size, which defines the screenshot size. The page size can be customized with Page.setViewport(). Let's change the viewport to be 1920px x 1080px. Insert the following code before the goto method:

await page.setViewport({
  width: 1920,
  height: 1080,
  deviceScaleFactor: 1,
})

Now go ahead and also change the filename from unsplash.png to unsplash2.png in the screenshot method like so:

await page.screenshot({ path: 'unsplash2.png' })

The whole screenshot.js file should now look like this:

const puppeteer = require('puppeteer')

const main = async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({
    width: 1920,
    height: 1080,
    deviceScaleFactor: 1,
  })
  await page.goto('https://unsplash.com')
  await page.screenshot({ path: 'unsplash2.png' })

  await browser.close()
}

main()

The post Getting Started with Puppeteer appeared first on SitePoint.

Getting Started with the React Native Navigation Library

One of the most important aspects of React Native app development is the navigation. It's what allows users to get to the pages they're looking for. That's why it's important to choose the best navigation library to suit your needs. If your app has a lot of screens with relatively complex UI, it might be worth exploring React Native Navigation instead of React Navigation. This is because there will always be performance bottlenecks with React Navigation, since it works off the same JavaScript thread as the rest of the app, and UI updates have to cross the bridge between JavaScript and native code. The more complex your UI, the more data has to be passed over that bridge, which can potentially slow it down.

In this tutorial, we'll be looking at the React Native Navigation library by Wix, an alternative navigation library for those who are looking for smoother navigation performance in their React Native apps.

Prerequisites

Knowledge of React and React Native is required to follow this tutorial. Prior experience with a navigation library such as React Navigation is optional.

App Overview

In order to demonstrate how to use the library, we'll be creating a simple app that uses it. The app will have five screens in total:

- Initialization: this serves as the initial screen for the app. If the user is logged in, it will automatically navigate to the home screen. If not, the user is navigated to the login screen.
- Login: this allows the user to log in so they can view the home, gallery, and feed. To simplify things, the login will just be mocked; no actual authentication code will be involved. From this screen, the user can also go to the forgot-password screen.
- ForgotPassword: a filler screen, which asks for the user's email address. This will simply be used to demonstrate stack navigation.
- Home: the initial screen that the user will see when they log in. From here, they can also navigate to either the gallery or feed screens via a bottom tab navigation.
- Gallery: a filler screen which shows a photo gallery UI.
- Feed: a filler screen which shows a news feed UI.

Here's what the app will look like:

You can find the source code of the sample app on this GitHub repo.

Bootstrapping the App

Let's start by generating a new React Native project:

react-native init RNNavigation --version react-native@0.57.8

Note: we're using a slightly older version of React Native, because React Native Navigation doesn't work well with later versions of React Native. React Native Navigation hasn't really kept up with the changes in the core of React Native since version 0.58. The only version known to work flawlessly with React Native is the version we're going to use. If you check the issues on their repo, you'll see various issues on versions 0.58 and 0.59. There might be workarounds on those two versions, but the safest bet is still version 0.57.

As for React Native version 0.60, the core team has made a lot of changes. One of them is the migration to AndroidX, which aims to make it clearer which packages are bundled with the Android operating system. This essentially means that if a native module uses any of the old packages that got migrated to the new androidx.* package hierarchy, it will break. There are tools such as jetifier, which allows for migration to AndroidX, but this doesn't ensure React Native Navigation will work.

Next, install the dependencies of the app:

- react-native-navigation — the navigation library that we're going to use.
- @react-native-community/async-storage — for saving data to the app's local storage.
- react-native-vector-icons — for showing icons for the bottom tab navigation.

yarn add react-native-navigation @react-native-community/async-storage react-native-vector-icons

In the next few sections, we'll be setting up the packages we just installed.

Setting up React Native Navigation

First, we'll set up the React Native Navigation library. The instructions that we'll be covering here are also in the official documentation.
Unfortunately, it's not written in a very beginner-friendly way, so we'll be covering it in more detail. Note: the demo project includes Android and iOS folders as well. You can use those as a reference if you encounter any issues with setting things up. Since the name of the library is very long, I'll simply refer to it as RNN from now on.

Android Setup

In this section, we'll take a look at how you can set up RNN for Android. Before you proceed, it's important to update all the SDK packages to the latest versions. You can do that via the Android SDK Manager.

settings.gradle

Add the following to your android/settings.gradle file:

include ':react-native-navigation'
project(':react-native-navigation').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-navigation/lib/android/app/')

Gradle Wrapper Properties

In your android/gradle/wrapper/gradle-wrapper.properties, update Gradle's distributionUrl to use version 4.4 if it's not already using it:

distributionUrl=https\://services.gradle.org/distributions/gradle-4.4-all.zip

build.gradle

Next, in your android/build.gradle file, add mavenLocal() and mavenCentral() under buildscript -> repositories:

buildscript {
  repositories {
    google()
    jcenter()
    // add these:
    mavenLocal()
    mavenCentral()
  }
}

Next, update the classpath under buildscript -> dependencies to point to the Gradle version that we need:

buildscript {
  repositories {
    ...
  }
  dependencies {
    classpath 'com.android.tools.build:gradle:3.0.1'
  }
}

Under allprojects -> repositories, add mavenCentral() and JitPack. This allows us to pull the data from React Native Navigation's JitPack repository:

allprojects {
  repositories {
    mavenLocal()
    google()
    jcenter()
    mavenCentral() // add this
    maven { url 'https://jitpack.io' } // add this
  }
}

Next, add the global config for setting the build tools and SDK versions for Android:

allprojects {
  ...
}

ext {
  buildToolsVersion = "27.0.3"
  minSdkVersion = 19
  compileSdkVersion = 26
  targetSdkVersion = 26
  supportLibVersion = "26.1.0"
}

Lastly, we'd still want to keep the default react-native run-android command when compiling the app, so we have to set Gradle to ignore the other flavors of React Native Navigation except the one we're currently using (reactNative57_5). Ignoring them ensures that we only compile the specific version we're depending on:

ext {
  ...
}

subprojects { subproject ->
  afterEvaluate {
    if ((subproject.plugins.hasPlugin('android') || subproject.plugins.hasPlugin('android-library'))) {
      android {
        variantFilter { variant ->
          def names = variant.flavors*.name
          if (names.contains("reactNative51") || names.contains("reactNative55") || names.contains("reactNative56") || names.contains("reactNative57")) {
            setIgnore(true)
          }
        }
      }
    }
  }
}

Note: there are four other flavors of RNN that currently exist. These are the ones we're ignoring above:

- reactNative51
- reactNative55
- reactNative56
- reactNative57

android/app/build.gradle

In your android/app/build.gradle file, under android -> compileOptions, make sure that the source and target compatibility version is 1.8:

android {
  defaultConfig {
    ...
  }
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}

Then, in your dependencies, include react-native-navigation as a dependency:

dependencies {
  implementation fileTree(dir: "libs", include: ["*.jar"])
  implementation "com.android.support:appcompat-v7:${rootProject.ext.supportLibVersion}"
  implementation "com.facebook.react:react-native:+"
  implementation project(':react-native-navigation') // add this
}

Lastly, under android -> defaultConfig, set the missingDimensionStrategy to reactNative57_5.
This is the version of RNN that's compatible with React Native 0.57.8:

defaultConfig {
  applicationId "com.rnnavigation"
  minSdkVersion rootProject.ext.minSdkVersion
  targetSdkVersion rootProject.ext.targetSdkVersion
  missingDimensionStrategy "RNN.reactNativeVersion", "reactNative57_5" // add this
  versionCode 1
  versionName "1.0"
  ndk {
    abiFilters "armeabi-v7a", "x86"
  }
}

The post Getting Started with the React Native Navigation Library appeared first on SitePoint.

How TypeScript Makes You a Better JavaScript Developer

What do Airbnb, Google, Lyft and Asana have in common? They've all migrated several codebases to TypeScript. Whether it's eating healthier, exercising, or sleeping more, we humans love self-improvement. The same applies to our careers. If someone shared tips for improving as a programmer, your ears would perk up.

In this article, the goal is to be that someone. We know TypeScript will make you a better JavaScript developer for several reasons. You'll feel confident when writing code. Fewer errors will appear in your production code. It will be easier to refactor code. You'll write fewer tests (yay!). And overall, you'll have a better coding experience in your editor.

What Even Is TypeScript?

TypeScript is a compiled language. You write TypeScript and it compiles to JavaScript. Essentially, you're writing JavaScript, but with a type system. JavaScript developers should have a seamless transition because the languages are the same, except for a few quirks. Here's a basic example of a function in both JavaScript and TypeScript:

function helloFromSitePoint(name) {
  return `Hello, ${name} from SitePoint!`
}

function helloFromSitePoint(name: string) {
  return `Hello, ${name} from SitePoint!`
}

Notice how the two are almost identical. The difference is the type annotation on the "name" parameter in TypeScript. This tells the compiler, "Hey, make sure when someone calls this function, they only pass in a string." We won't go into much depth, but this example should illustrate the bare minimum of TypeScript.

How Will TypeScript Make Me Better?

TypeScript will improve your skills as a JavaScript developer by:

- giving you more confidence,
- catching errors before they hit production,
- making it easier to refactor code,
- saving you time from writing tests,
- providing you with a better coding experience.

Let's explore each of these a bit deeper.

The post How TypeScript Makes You a Better JavaScript Developer appeared first on SitePoint.

Face Detection and Recognition with Keras

If you're a regular user of Google Photos, you may have noticed how the application automatically extracts and groups faces of people from the photos that you back up to the cloud.

Face Recognition in the Google Photos web application

A photo application such as Google's achieves this through the detection of faces of humans (and pets too!) in your photos, and by then grouping similar faces together. Detection and then classification of faces in images is a common task in deep learning with neural networks.

In the first step of this tutorial, we'll use a pre-trained MTCNN model in Keras to detect faces in images. Once we've extracted the faces from an image, we'll compute a similarity score between these faces to find if they belong to the same person.

Prerequisites

Before you start with detecting and recognizing faces, you need to set up your development environment. First, you need to "read" images through Python before doing any processing on them. We'll use the plotting library matplotlib to read and manipulate images. Install the latest version through the installer pip:

pip3 install matplotlib

To use any implementation of a CNN algorithm, you need to install keras. Download and install the latest version using the command below:

pip3 install keras

The algorithm that we'll use for face detection is MTCNN (Multi-task Cascaded Convolutional Networks), based on the paper Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks (Zhang et al., 2016). An implementation of the MTCNN algorithm for TensorFlow in Python 3.4+ is available as a package. Run the following command to install the package through pip:

pip3 install mtcnn

To compare faces after extracting them from images, we'll use the VGGFace2 algorithm developed by the Visual Geometry Group at the University of Oxford.
A TensorFlow-based Keras implementation of the VGG algorithm is available as a package for you to install:

pip3 install keras_vggface

While you may feel the need to build and train your own model, you'd need a huge training dataset and vast processing power. Since this tutorial focuses on the utility of these models, it uses existing, trained models by experts in the field. Now that you've successfully installed the prerequisites, let's jump right into the tutorial!

Step 1: Face Detection with the MTCNN Model

The objectives in this step are as follows:

- retrieve images hosted externally to a local server
- read images through matplotlib's imread() function
- detect and explore faces through the MTCNN algorithm
- extract faces from an image.

1.1 Store External Images

You may often be doing an analysis from images hosted on external servers. For this example, we'll use two images of Lee Iacocca, the father of the Mustang, hosted on the BBC and The Detroit News sites. To temporarily store the images locally for our analysis, we'll retrieve each from its URL and write it to a local file. Let's define a function store_image for this purpose:

import urllib.request

def store_image(url, local_file_name):
    with urllib.request.urlopen(url) as resource:
        with open(local_file_name, 'wb') as f:
            f.write(resource.read())

You can now simply call the function with the URL and the local file in which you'd like to store the image:

store_image('https://ichef.bbci.co.uk/news/320/cpsprodpb/5944/production/_107725822_55fd57ad-c509-4335-a7d2-bcc86e32be72.jpg',
            'iacocca_1.jpg')
store_image('https://www.gannett-cdn.com/presto/2019/07/03/PDTN/205798e7-9555-4245-99e1-fd300c50ce85-AP_080910055617.jpg?width=540&height=&fit=bounds&auto=webp',
            'iacocca_2.jpg')

After successfully retrieving the images, let's detect faces in them.
1.2 Detect Faces in an Image

For this purpose, we'll make two imports — matplotlib for reading images, and mtcnn for detecting faces within the images:

from matplotlib import pyplot as plt
from mtcnn.mtcnn import MTCNN

Use the imread() function to read an image:

image = plt.imread('iacocca_1.jpg')

Next, initialize an MTCNN() object into the detector variable and use the .detect_faces() method to detect the faces in an image. Let's see what it returns:

detector = MTCNN()

faces = detector.detect_faces(image)
for face in faces:
    print(face)

For every face, a Python dictionary is returned, which contains three keys. The box key contains the boundary of the face within the image. It has four values: x- and y-coordinates of the top left vertex, and the width and height of the rectangle containing the face. The other keys are confidence and keypoints. The keypoints key contains a dictionary with the features of a face that were detected, along with their coordinates:

{'box': [160, 40, 35, 44], 'confidence': 0.9999798536300659, 'keypoints': {'left_eye': (172, 57), 'right_eye': (188, 57), 'nose': (182, 64), 'mouth_left': (173, 73), 'mouth_right': (187, 73)}}

1.3 Highlight Faces in an Image

Now that we've successfully detected a face, let's draw a rectangle over it to highlight the face within the image, to verify that the detection was correct. To draw a rectangle, import the Rectangle object from matplotlib.patches:

from matplotlib.patches import Rectangle

Let's define a function highlight_faces to first display the image and then draw rectangles over faces that were detected. First, read the image through imread() and plot it through imshow(). For each face that was detected, draw a rectangle using the Rectangle() class. Finally, display the image and the rectangles using the .show() method.
If you're using Jupyter notebooks, you may use the %matplotlib inline magic command to show plots inline:

def highlight_faces(image_path, faces):
    # display image
    image = plt.imread(image_path)
    plt.imshow(image)
    ax = plt.gca()

    # for each face, draw a rectangle based on coordinates
    for face in faces:
        x, y, width, height = face['box']
        face_border = Rectangle((x, y), width, height,
                                fill=False, color='red')
        ax.add_patch(face_border)
    plt.show()

Let's now display the image and the detected face using the highlight_faces() function:

highlight_faces('iacocca_1.jpg', faces)

Detected face in an image of Lee Iacocca. Source: BBC

Let's display the second image and the face(s) detected in it:

image = plt.imread('iacocca_2.jpg')
faces = detector.detect_faces(image)

highlight_faces('iacocca_2.jpg', faces)

The Detroit News

In these two images, you can see that the MTCNN algorithm correctly detects faces. Let's now extract this face from the image to perform further analysis on it.

1.4 Extract Face for Further Analysis

At this point, you know the coordinates of the faces from the detector. Extracting the faces is a fairly easy task using list indices. However, the VGGFace2 algorithm that we use needs the faces to be resized to 224 x 224 pixels. We'll use the PIL library to resize the images.
The function extract_face_from_image() extracts all faces from an image:

from numpy import asarray
from PIL import Image

def extract_face_from_image(image_path, required_size=(224, 224)):
    # load image and detect faces
    image = plt.imread(image_path)
    detector = MTCNN()
    faces = detector.detect_faces(image)

    face_images = []

    for face in faces:
        # extract the bounding box from the requested face
        x1, y1, width, height = face['box']
        x2, y2 = x1 + width, y1 + height

        # extract the face
        face_boundary = image[y1:y2, x1:x2]

        # resize pixels to the model size
        face_image = Image.fromarray(face_boundary)
        face_image = face_image.resize(required_size)
        face_array = asarray(face_image)
        face_images.append(face_array)

    return face_images

extracted_face = extract_face_from_image('iacocca_1.jpg')

# Display the first face from the extracted faces
plt.imshow(extracted_face[0])
plt.show()

Here is how the extracted face looks from the first image.

Extracted and resized face from first image

The post Face Detection and Recognition with Keras appeared first on SitePoint.

React Native End-to-end Testing and Automation with Detox

Detox is an end-to-end testing and automation framework that runs on a device or a simulator, just like an actual end user.

Software development demands fast responses to user and/or market needs. This fast development cycle can result (sooner or later) in parts of a project being broken, especially as the project grows larger. Developers get overwhelmed by all the technical complexities of the project, and even the business people start to find it hard to keep track of all the scenarios the product caters for. In this scenario, there's a need for software to keep on top of the project and allow us to deploy with confidence.

But why end-to-end testing? Aren't unit testing and integration testing enough? And why bother with the complexity that comes with end-to-end testing? First of all, the complexity issue has been tackled by most of the end-to-end frameworks, to the extent that some tools (whether free, paid or limited) allow us to record the test as a user, then replay it and generate the necessary code. Of course, that doesn't cover the full range of scenarios that you'd be able to address programmatically, but it's still a very handy feature.

End-to-end, Integration and Unit Testing

End-to-end testing versus integration testing versus unit testing: I always find the word "versus" drives people to take camps — as if it's a war between good and evil. That drives us to take camps instead of learning from each other and understanding the why instead of the how. The examples are countless: Angular versus React, React versus Angular versus Vue, and even more, React versus Angular versus Vue versus Svelte. Each camp trash talks the other.
jQuery made me a better developer by taking advantage of the facade pattern $('') to tame the wild DOM beast and keep my mind on the task at hand. Angular made me a better developer by taking advantage of componentizing the reusable parts into directives that can be composed (v1). React made me a better developer by taking advantage of functional programming, immutability, identity reference comparison, and a level of composability that I don't find in other frameworks. Vue made me a better developer by taking advantage of reactive programming and the push model. I could go on and on, but I'm just trying to demonstrate the point that we need to concentrate more on the why: why this tool was created in the first place, what problems it solves, and whether there are other ways of solving the same problems.

As You Go Up, You Gain More Confidence

As you go further along the spectrum of simulating the user journey, you have to do more work to simulate the user's interaction with the product. But on the other hand, you get the most confidence because you're testing the real product that the user interacts with. So you catch all the issues — whether it's a styling issue that could cause a whole section or a whole interaction process to be invisible or non-interactive, a content issue, a UI issue, an API issue, a server issue, or a database issue. You get all of this covered, which gives you the most confidence.

Why Detox?

We discussed the benefit of end-to-end testing to begin with and its value in providing the most confidence when deploying new features or fixing issues. But why Detox in particular? At the time of writing, it's the most popular library for end-to-end testing in React Native and the one that has the most active community. On top of that, it's the one React Native recommends in its documentation. The Detox testing philosophy is "gray-box testing".
Gray-box testing is testing where the framework knows about the internals of the product it's testing. In other words, it knows it's in React Native and knows how to start up the application as a child of the Detox process and how to reload it if needed after each test. So each test result is independent of the others.

Prerequisites

- macOS High Sierra 10.13 or above
- Xcode 10.1 or above
- Homebrew: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
- Node 8.3.0 or above: brew update && brew install node
- Apple Simulator Utilities: brew tap wix/brew and brew install applesimutils
- Detox CLI 10.0.7 or above: npm install -g detox-cli

See the Result in Action

First, let's clone a very interesting open-source React Native project for the sake of learning, then add Detox to it:

git clone https://github.com/ahmedam55/movie-swiper-detox-testing.git
cd movie-swiper-detox-testing
npm install
react-native run-ios

Create an account on The Movie DB website to be able to test all the application scenarios. Then add your username and password in the .env file, in place of usernamePlaceholder and passwordPlaceholder respectively:

isTesting=true
username=usernamePlaceholder
password=passwordPlaceholder

After that, you can now run the tests:

detox test

Note that I had to fork this repo from the original one, as there were a lot of breaking changes between detox-cli, detox, and the project libraries. Use the following steps as a basis for what to do:

- Migrate it completely to the latest React Native project.
- Update all the libraries to fix issues faced by Detox when testing.
- Toggle animations and infinite timers if the environment is testing.
- Add the test suite package.

Setup for New Projects

Add Detox to Our Dependencies

Go to your project's root directory and add Detox:

npm install detox --save-dev

Configure Detox

Open the package.json file and add the following right after the project name config.
Be sure to replace movieSwiper in the iOS config with the name of your app. Here we're telling Detox where to find the binary app and the command to build it. (This is optional. We can always execute react-native run-ios instead.) Also choose which type of simulator: ios.simulator, ios.none, android.emulator, or android.attached. And choose which device to test on:

{
  "name": "movie-swiper-detox-testing",
  // add these:
  "detox": {
    "configurations": {
      "ios.sim.debug": {
        "binaryPath": "ios/build/movieSwiper/Build/Products/Debug-iphonesimulator/movieSwiper.app",
        "build": "xcodebuild -project ios/movieSwiper.xcodeproj -scheme movieSwiper -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build",
        "type": "ios.simulator",
        "name": "iPhone 7 Plus"
      }
    }
  }
}

Here's a breakdown of what the config above does:

- Execute react-native run-ios to create the binary app.
- Search for the binary app at the root of the project: find . -name "*.app".
- Put the result in the build directory.

Before firing up the test suite, make sure the device name you specified is available (for example, iPhone 7 Plus). You can do that from the terminal by executing the following:

xcrun simctl list

Now that we've added Detox to our project and told it which simulator to start the application with, we need a test runner to manage the assertions and the reporting — whether it's on the terminal or otherwise. Detox supports both Jest and Mocha. We'll go with Jest, as it has a bigger community and a bigger feature set. In addition to that, it supports parallel test execution, which could be handy to speed up the end-to-end tests as they grow in number.

Adding Jest to Dev Dependencies

Execute the following to install Jest:

npm install jest jest-cli --save-dev

The post React Native End-to-end Testing and Automation with Detox appeared first on SitePoint.

How to Build Your First Amazon Alexa Skill

Out of the box, Alexa supports a number of built-in skills, such as adding items to your shopping list or requesting a song. However, developers can build new custom skills by using the Alexa Skill Kit (ASK). The ASK, a collection of APIs and tools, handles the hard work related to voice interfaces, including speech recognition, text-to-speech encoding, and natural language processing. It helps developers build custom skills quickly and easily.

In short, the sole reason that Alexa can understand a user's voice commands is that it has skills defined. Every Alexa skill is a piece of software designed to understand voice commands. Also, each Alexa skill has its own logic defined that creates an appropriate response for the voice command. To give you an idea of some existing Alexa skills, they include:

- ordering pizza at Domino's Pizza
- calling for an Uber
- telling you your horoscope

So, as said, we can develop our own custom skills fitted to our needs with the Alexa Skill Kit. In this article, you'll learn how to create a basic "get a fact" Alexa skill. In short, we can ask Alexa to present us with a random cat fact. The complete code for completing our task can be found on GitHub. Before we get started, let's make sure we understand the Alexa skill terminology.

Mastering Alexa Skill Terminology

First, let's learn how a user can interact with a custom skill. This will be important for understanding the different concepts related to skills. In order to activate a particular skill, the user has to call Alexa and ask to open a skill. For example: "Alexa, open cat fact". By doing this, we're calling the invocation name of the skill. Basically, the invocation name can be seen as the name of the application.
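Under the hood, the invocation name, together with the intents and sample utterances we'll define next, ends up in a JSON interaction model that the Alexa developer console generates for the skill. A rough sketch of what that model looks like (the intent name GetCatFactIntent is an illustrative assumption, and the exact schema comes from the console):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "custom cat fact",
      "intents": [
        {
          "name": "GetCatFactIntent",
          "samples": [
            "tell a fact",
            "give a cat fact",
            "give a fact"
          ]
        }
      ]
    }
  }
}
```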
Now that we've started the right skill, we have access to the voice intents/commands the skill understands. As we want to keep things simple, we define a "Get Cat Fact" intent. However, we need to provide sample sentences to trigger the intent. An intent can be triggered by many example sentences, also called utterances. For example, a user might say "Give a fact". Therefore, we define the following example sentences:

- "Tell a fact"
- "Give a cat fact"
- "Give a fact"

It's even possible to combine the invocation name with an intent, like this: "Alexa, ask Cat Fact to give a fact". Now that we know the difference between an invocation name and an intent, let's move on to creating your first Alexa skill.

Creating an Amazon Developer Account

To get started, we need an Amazon Developer account. If you have one, you can skip this section. Signing up for an Amazon Developer account is a three-step process: Amazon requires some personal information, acceptance of the terms of service, and a payment method. The advantage of signing up for an Amazon Developer account is that you get access to a plethora of other Amazon services. Once the signup has been successfully completed, you'll see the Amazon Developer dashboard. Log yourself in to the dashboard and click on the Developer Console button in the top-right corner. Next up, we want to open the Alexa Skills Kit. If you were unable to open the Alexa Skills Kit, use this link. In the following section, we'll create our actual skill.

Creating Our First Custom Alexa Skill

Okay, we're set to create our first custom Alexa skill. Click the blue button Create Skill to open up the menu for creating a new skill. Firstly, it will prompt us for the name of our skill. As you already know, we want random cat facts, and therefore call the skill custom cat fact (we can't use cat fact, as that's a built-in skill for Alexa devices). Next, it prompts us to pick a model for the skill.
We can choose between some predefined models or go for a custom model that gives us full flexibility. As we don't want to deal with code we don't need, we go for the Custom option.

Note: if you choose a predefined model, you get a list of interaction models and example sentences (utterances). Even the custom model, however, comes equipped with the most basic intents: Cancel, Help, NavigateHome, and Stop.

Next, we need to pick a way to host our skill. Again, we don't want to overcomplicate things, so we pick the Alexa-Hosted (Node.js) option. This means we don't have to run a back end ourselves, which requires some effort to make "Alexa compliant": you'd have to format every response according to the Amazon Alexa standards for a device to understand it. The Alexa-hosted option will host the skill in your account up to the AWS Free Tier limits and get you started with a Node.js template. You gain access to an AWS Lambda endpoint, 5 GB of media storage with 15 GB of monthly data transfer, and a table for session persistence.

Okay, now that all settings are in place, click the Create Skill button in the top-right corner of the screen. This generates the actual skill in our Amazon Developer account.

Modifying Your First Alexa Skill

If you now navigate to the Alexa Developer Console, you'll find your skill listed there. Click the edit button to start modifying the skill. Amazon will then display the build tab for the Cat Fact skill. On the left-hand side, you'll find the list of intents defined for the skill. As mentioned before, the Alexa Skills Kit generates Cancel, Stop, Help, and NavigateHome intents by default. The first three are helpful for a user who wants to quit the skill or doesn't know how to use it. The last one, NavigateHome, is only used in complex skills that involve multiple steps.

Step 1: Verify the Invocation Name

First of all, let's verify that the invocation name for the skill is correct.
The name should say "custom cat fact". If you change the name, make sure to hit the Save Model button at the top of the page.

The post How to Build Your First Amazon Alexa Skill appeared first on SitePoint.

How to Build a Web App with GraphQL and React

In this tutorial, we'll learn to build a web application with React and GraphQL. We'll consume the API available from graphql-pokemon, served from this link, which allows you to get information about Pokémon.

GraphQL is a query language for APIs, and a runtime for fulfilling those queries, created by Facebook. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

In this tutorial, we'll only build the front end of a GraphQL application, using Apollo to fetch data from a ready-made GraphQL API hosted on the web. Let's get started with the prerequisites!

Prerequisites

There are a few prerequisites for this tutorial:

- recent versions of Node.js and npm installed on your system
- knowledge of JavaScript/ES6
- familiarity with React

If you don't have Node and npm installed on your development machine, you can simply download the binaries for your system from the official website. You can also use NVM, a POSIX-compliant bash script, to manage multiple active Node.js versions.

Installing create-react-app

Let's install create-react-app, a tool that allows you to quickly initialize and work with React projects. Open a new terminal and run the following command:

npm install -g create-react-app

Note: you may need to use sudo before the command on Linux and macOS, or use a command prompt with administrator rights on Windows, if you get EACCESS errors when installing the package globally on your machine. You can also simply fix your npm permissions. At the time of writing, this installs create-react-app v3.1.1.

Creating a React Project

Now we're ready to create our React project.
Go back to your terminal and run the following command:

create-react-app react-pokemon

Next, navigate into your project's folder and start the local development server:

cd react-pokemon
npm start

Go to http://localhost:3000 in your web browser to see your app up and running.

Installing Apollo Client

Apollo Client is a complete data management solution that's commonly used with React, but can be used with any other library or framework. Apollo provides intelligent caching, which enables it to be a single source of truth for the local and remote data in your application. You'll need to install the following packages in your React project to work with Apollo:

- graphql: the JavaScript reference implementation for GraphQL
- apollo-client: a fully featured, caching GraphQL client with integrations for React, Angular, and more
- apollo-cache-inmemory: the recommended cache implementation for Apollo Client 2.0
- apollo-link-http: the most common Apollo Link, a system of modular components for GraphQL networking
- react-apollo: allows you to fetch data from your GraphQL server and use it to build complex and reactive UIs with React
- graphql-tag: provides helpful utilities for parsing GraphQL queries, such as the gql tag

Open a new terminal, navigate to your project's folder, and run the following commands:

npm install graphql --save
npm install apollo-client --save
npm install apollo-cache-inmemory --save
npm install apollo-link-http --save
npm install react-apollo --save
npm install graphql-tag --save

Now that we've installed the necessary packages, we need to create an instance of ApolloClient.
Open the src/index.js file and add the following code:

```javascript
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'https://graphql-pokemon.now.sh/'
});

const client = new ApolloClient({
  cache,
  link
});
```

We first create an instance of InMemoryCache, then an instance of HttpLink, passing in our GraphQL API URI. Next, we create an instance of ApolloClient, providing it with the cache and link instances.

Connecting the Apollo Client to React Components

After creating the instance of ApolloClient, we need to connect it to our React component(s). We'll use the new Apollo hooks, which allow us to easily bind GraphQL operations to our UI. We can connect Apollo Client to our React app by simply wrapping the root App component with the ApolloProvider component, which is exported from the @apollo/react-hooks package, and passing the client instance via the client prop. The ApolloProvider component is similar to React's Context provider: it wraps your React app and places the client in the context, which enables you to access it from anywhere in your app. Now let's import the ApolloProvider component in our src/index.js file and wrap the App component.
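The original post breaks off before showing the wrapping step. Under the setup above, it would likely look something like this sketch, assuming client is the ApolloClient instance created earlier and App is the root component create-react-app generated:

```javascript
// Hypothetical completion of src/index.js: wrap the root component
// with ApolloProvider and pass the client via the client prop.
import React from 'react';
import ReactDOM from 'react-dom';
import { ApolloProvider } from '@apollo/react-hooks';
import App from './App';

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);
```

With the provider in place, any component below App can run queries against the graphql-pokemon endpoint through hooks like useQuery, without receiving the client as a prop.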

How to Build Your First Discord Bot with Node.js

Nowadays, bots are being used to automate all kinds of tasks. Since the release of Amazon's Alexa devices, the hype surrounding automation bots has only grown. Besides Alexa, other communication tools like Discord and Telegram offer APIs for developing custom bots. This article focuses solely on creating your first bot with the exposed Discord API.

Perhaps the best-known Discord bot is the Music Bot: you type a song name, and the bot attaches a new user to your channel who plays the requested song. It's a commonly used bot among younger people on gaming or streaming servers. Let's get started with creating a custom Discord bot.

Prerequisites

- Node.js v10 or higher installed (basic knowledge)
- a Discord account and Discord client
- basic knowledge of using a terminal

Step 1: Set Up a Test Server

First of all, we need a test server on which we can later test our Discord bot. We can create a new server by clicking the plus icon in the bottom-left corner. A pop-up will ask whether you want to join a server or create a new one. Of course, we want to create a new one. Next, we need to input a name for our server. To keep things simple, I've named the server discord_playground. If you want, you can change the server location, depending on where you're located, to get a better ping. If everything went well, you should see your newly created server.

Step 2: Generating an Auth Token

To control our bot via code, we first need to register the bot under our Discord account. To register the bot, go to the Discord Developer Portal and log in with your account. After logging in, you should see the dashboard. Let's create a new application by clicking the New Application button. Next, you'll see a pop-up asking you to input a name for your application. Let's call our bot my-greeter-bot. By clicking the Create button, Discord will create an API application.
When the application has been created, you'll see an overview of the newly created my-greeter-bot application, with information like a client ID and client secret. This secret will be used later as the authorization token.

Now, click the Bot option in the Settings menu. Discord will build our my-greeter-bot application and add a bot user to it. When the bot has been built, you get an overview of your custom bot. Take a look at the Token section: copy this authorization token and write it down somewhere, as we'll need it later to connect to our bot user.
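To preview where the token ends up, here's a rough sketch of how a greeter bot would eventually connect with it. The discord.js package and the greet logic are assumptions for illustration; the article itself stops after generating the token:

```javascript
// Hypothetical greeter-bot sketch. The greeting logic is kept as a plain
// function so it's easy to reason about and test in isolation.
function greet(content) {
  // Reply only to an exact "hello" message; ignore everything else.
  return content === 'hello' ? 'Hello there!' : null;
}

// Wire it up to Discord only when the discord.js package is available.
let Discord = null;
try {
  Discord = require('discord.js'); // npm install discord.js
} catch (e) {
  // discord.js not installed; skip the live connection.
}

if (Discord) {
  const client = new Discord.Client();
  client.on('message', (msg) => {
    const reply = greet(msg.content);
    if (reply) msg.channel.send(reply);
  });
  // The authorization token copied from the Bot page; keep it secret,
  // e.g. by reading it from an environment variable instead of hardcoding it.
  client.login(process.env.DISCORD_TOKEN);
}
```

Reading the token from an environment variable (rather than pasting it into the source) avoids accidentally committing it to version control.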