
Scott Hanselman


https://www.hanselman.com/blog/

Scott Hanselman on Programming, The Web, Open Source, .NET, The Cloud and More


I miss Microsoft Encarta

Microsoft Encarta came out in 1993 and was one of the first CD-ROMs I had. It stopped shipping in 2009 on DVD. I recently found a disk and was impressed that it installed perfectly on my latest Windows 10 machine and runs nicely.

Encarta existed in an interesting place between the rise of the internet and computers' ability to deal with (at the time) massive amounts of data. CD-ROMs could bring us 700 MEGABYTES, which was unbelievable compared to the 1.44MB (or even 120KB) floppy disks we were used to. The idea that Encarta was so large that it spanned 5 CD-ROMs (!) was staggering, even though that's just a few gigs today. Even a $5 USB stick could hold Encarta - twice! My kids can't possibly intellectualize the scale that data exists at today. We could barely believe that a whole bookshelf of encyclopedias was now in our pockets.

I spent hours and hours just wandering around random articles in Encarta. The scope of knowledge was overwhelming, but accessible. But it was contained - it was bounded. Today, my kids just assume that the sum of all human knowledge is available with a single search or a "hey Alexa," so the world's mysteries are less mysterious, and they become bored by the Paradox of Choice.

In a world of 4k streaming video, global wireless, and high-speed everything, there's really no analog to the feeling we got watching the Moon Landing as a video in Encarta - short of watching it live on TV in 1969! For most of us, this was the first time we'd ever seen full-motion video on-demand on a computer in any sort of fidelity - and these are mostly 320x240 or smaller videos! A generation of us grew up hearing MLK's "I have a dream" speech inside Microsoft Encarta! Remember the Encarta "So, you wanna play some Basketball" video? Amazed by Google Earth? You never saw the globe in Encarta.

You'll perhaps be surprised to hear that the Encarta Timeline works even today across THREE 4k monitors at nearly 10,000 pixels across! This was a product written over 10 years ago that could never have conceived of that many pixels. It works great!

Most folks at Microsoft don't realize that Encarta exists and is used TODAY all over the developing world on disconnected or occasionally connected computers. (Perhaps Microsoft could make the final version of Encarta available as a free final download so that we might avoid downloading illegal or malware-infested versions?)

What are your fond memories of Encarta? If you're not of the Encarta generation, what's your impression of it? Had you heard of or thought about it?


The PICO-8 Virtual Fantasy Console is an idealized constrained modern day game maker

I love everything about PICO-8. It's a fantasy gaming console that wants you - and the kids in your life and everyone you know - to make games! How cool is that?

You know the game Celeste? It's available on every platform, has won every award, and is generally considered a modern-day classic. Well, the first version was made on PICO-8 in 4 days as a hackathon project and you can play it here online. Here's the link from when it launched 4 years ago on the forums. They pushed the limits, as they call out: "We used pretty much all our resources for this. 8186/8192 code, the entire spritemap, the entire map, and 63/64 sounds." How far could one go? Wolf3D even?

"A fantasy console is like a regular console, but without the inconvenience of actual hardware. PICO-8 has everything else that makes a console a console: machine specifications and display format, development tools, design culture, distribution platform, community and playership. It is similar to a retro game emulator, but for a machine that never existed. PICO-8's specifications and ecosystem are instead designed from scratch to produce something that has its own identity and feels real. Instead of physical cartridges, programs made for PICO-8 are distributed on .png images that look like cartridges, complete with labels and a fixed 32k data capacity."

What a great start and great proof that you can make an amazing game in a small space. If you loved GameBoys and have fond memories of GBA and other small games, you'll love PICO-8.

How to play PICO-8 cartridges

If you just want to explore, you can go to https://www.lexaloffle.com and just play in your browser! PICO-8 is a "fantasy console" that doesn't exist physically (unless you build one, more on that later). If you want to develop cartridges and play locally, you can buy the whole system (any platform) for $14.99, which I have.

If you have Windows and Chrome or New Edge you can just plug in your Xbox Controller with a micro-USB cable and visit https://www.lexaloffle.com/pico-8.php and start playing now! It's amazing - yes, I know how it works, but it's still amazing - to me to be able to play a game in a web browser using a game controller. I guess I'm easily impressed.

It wasn't very clear to me how to load and play any cartridge LOCALLY. For example, I can play Demon Castle here on the Forums, but how do I play it locally and later, offline? The easy way is to run PICO-8 and hit ESC to get their command line. Then I type LOAD #cartid, where #cartid is literally the id of the cartridge on the forums. In the case of Demon Castle it's #demon_castle-0, so I can just LOAD #demon_castle-0 followed by RUN.

Alternatively - and this is just lovely - if I see the PNG pic of the cartridge on a web page, I can just save that PNG locally into C:\Users\scott\AppData\Roaming\pico-8\carts and then run it with LOAD demon_castle-0 (or I can include the full filename with extension). THAT PNG ABOVE IS THE ACTUAL GAME AS WELL. What a clever thing - a true virtual cartridge.

One of the many genius parts of the PICO-8 is that the "cartridges" are actually PNG pictures of cartridges. Drink that in for a second. They save a screenshot of the game while the cart is running, then they hide the actual code in a steganographic process - they are hiding the code in two of the bits of the color channels! Since the cart pics are 160*205 there's enough room for 32k. A p8 file is source code and a p8.png is the compiled cart!
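To make that math concrete: 160*205 pixels, with 2 bits hidden in each of the 4 color channels, gives exactly one byte per pixel - 32,800 bytes, a little over 32k. Here's a minimal C# sketch of how such a decoder *could* work. This is my own illustration, not PICO-8's actual tooling, and the exact bit ordering within the channels is a guess:

using System;
using System.Drawing; // needs the System.Drawing.Common package on .NET Core

class CartDecoder
{
    static byte[] Extract(string pngPath)
    {
        using (var img = new Bitmap(pngPath))
        {
            // One byte per pixel: 160 * 205 = 32,800 bytes, just over 32k
            var bytes = new byte[img.Width * img.Height];
            int i = 0;
            for (int y = 0; y < img.Height; y++)
            {
                for (int x = 0; x < img.Width; x++)
                {
                    Color c = img.GetPixel(x, y);
                    // Take the low 2 bits of each channel and pack them into
                    // one byte (the A,R,G,B ordering here is an assumption)
                    bytes[i++] = (byte)(((c.A & 3) << 6) | ((c.R & 3) << 4) |
                                        ((c.G & 3) << 2) | (c.B & 3));
                }
            }
            return bytes;
        }
    }
}

Because only the two low bits of each channel change, the visible label art barely shifts in color - that's the steganography trick.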
How to make PICO-8 games

The PICO-8 software includes everything you need - consciously constrained - to make AND play games. You hit ESC to move between the game and the game designer. It includes a sprite and music editor as well.

From their site, the specifications are TIGHT on purpose, because constraints are fun. When I wrote for the PalmPilot back in the 90s I had just 4k of heap and it was the most fun I'd had in years.

Display - 128x128, 16 colours
Cartridge Size - 32k
Sound - 4 channel chip blerps
Code - Lua
Sprites - 256 8x8 sprites
Map - 128x32 cels

"The harsh limitations of PICO-8 are carefully chosen to be fun to work with, to encourage small but expressive designs, and to give cartridges made with PICO-8 their own particular look and feel."

The code you will use is Lua. Here's some demo code of a Hello World that animates 11 sprites and includes two lines of text:

t = 0
music(0) -- play music from pattern 0

function _draw()
 cls()
 for i=1,11 do -- for each letter
  for j=0,7 do -- for each rainbow trail part
   t1 = t + i*4 - j*2 -- adjusted time
   y = 45-j + cos(t1/50)*5 -- vertical position
   pal(7, 14-j) -- remap colour from white
   spr(16+i, 8+i*8, y) -- draw letter sprite
  end
 end
 print("this is pico-8", 37, 70, 14)
 print("nice to meet you", 34, 80, 12)
 spr(1, 64-4, 90) -- draw heart sprite
 t += 1
end

That's just a simple example; there's a huge forum with thousands of games and lots of folks happy to help you in this new world of game creation with the PICO-8. Here's a wonderful PICO-8 Cheat Sheet to print out with a list of functions and concepts. Maybe set it as your wallpaper while developing? There's a detailed User Manual and a 72 page PICO-8 Zine PDF which is really impressive! And finally, be sure to bookmark this GitHub hosted amazing curated list of PICO-8 resources! https://github.com/pico-8/awesome-PICO-8

Writing PICO-8 Code in another Editor

There is a 3 year old PICO-8 extension for Visual Studio Code that is a decent start, although it's created assuming a Mac, so if you are a Windows user, you will need to change the Keyboard Shortcuts to something like "Ctrl-Shift-Alt-R" to run cartridges. There's no debugger that I'm seeing. In an ideal world we'd use launch.json and have a registered PICO-8 type and that would make launching after changing code a lot clearer.

There is a more recent "pico8vscodeditor" extension by Steve Robbins that includes snippets for loops and some snippets for the PICO-8 API. I recommend this newer, fleshed-out extension - kudos Steve! Be sure to include the full path to your PICO-8 executable, and note that the hotkey to run is a chord, starting with "Ctrl-8" then "R."

Editing code directly in the PICO-8 application is totally possible and you can truly develop an entire cart in there, but if you do, you're a better person than I. Here's a directory listing in VSCode on the left and PICO-8 on the right. And some code.

You can export to HTML5 as well as binaries for Windows, Mac, and Linux. It's a full game maker! There are also other game systems out there like PicoLove that take PICO-8 in different directions and those are worth knowing about as well.

What about a physical PICO-8 Console

A number of folks have talked about the ultimate portable handheld PICO-8 device. I have done a lot of spelunking and as of this writing it doesn't exist. You could get a Raspberry Pi Zero and put this Waveshare LCD hat on top. The screen is perfect. But the joystick and buttons...just aren't. There's also no sound by default. But $14 is a good start.
The Tiny GamePi15, also from Waveshare, could be good with decent buttons, but it has a 240x240 screen. The full sized Game Hat looks promising and has a large 480x320 screen, so you could play PICO-8 at a scaled 256x256. The RetroStone is also close, but you're truly on your own, compiling drivers yourself (twitter thread) from what I can gather.

The ClockworkPi GameShell is SOOOO close, but the screen is 320x240, which makes 128x128 an awkward scaled mess with aliasing, and the screen the Clockwork folks chose doesn't have a true grid of pixels - their pixels are staggered. Hopefully they'll offer an alternative module one day; then this would truly be the perfect device. There are clear instructions on how to get going. The PocketCHIP has a great screen but a nightmare input keyboard.

For now, any PC, laptop, or Raspberry Pi with a proper setup will do just fine for you to explore the PICO-8 and the world of fantasy consoles!


Good, Better, Best - creating the ultimate remote worker webcam setup on a budget

I've been a remote worker and an occasional YouTuber for well over a decade. I'm always looking for a better setup because the goal is clear - how can I interact with you and my co-workers in a way that has high-enough fidelity that I don't need to drive to Seattle every week? I believe that if my camera is clear and my audio is clear, then I can really have a remote relationship with my team that is effective and true. Everyone has a webcam these days and can just get on a video call and have a chat - but is it of sufficient quality that you feel like you're really having a good conversation with folks and truly connecting?

Here's a shot of my setup during a meeting I'm in here at Microsoft. And here are my thoughts on Good, Better, and Best setups for remote workers and YouTubers without spending thousands.

Good

The Logitech C270 Webcam can be had for as little as $20 or less! It's wholly adequate with enough light. It only does 720p and it's USB2, so I can't enthusiastically recommend it, but it's OK - again, if you throw light at it. In the dark it's just a webcam.

The Logitech USB Headset H570 is decent, as is the lovely Jabra UC Voice corded headset. I prefer the Jabra because it only covers one ear and doesn't give me the "two covered ears" claustrophobic feeling. To be clear - audio quality matters. Any crappy headset (or quality one, as above) will ALWAYS be better than your webcam's default or your laptop's default. Always. Mics need to be closer to your mouth to sound good.

Small webcam ring light. Light, light, light. Webcams, especially cheap ones, NEED LIGHT. It feels weird, and I get it, but the quality is SO MUCH BETTER with some decent fill light. Get a ring light that's powered by USB and use it on calls. Yes, it looks ridiculous, but it WORKS.

Better

How can we improve on the GOOD setup? Clearer video and better sound. Some folks feel the Logitech Brio is overhyped, and I think that's fair. It's a "4k" camera that's not as impressive as it should be. That said, it's a solid camera and arguably the best Logitech has to offer. If I could suggest a middle-of-the-road solid "BETTER" setup for a remote worker, I'd recommend these:

Logitech Brio - solid 1080p 30fps
Logitech USB Headset
LED light ring

The lights are the magic.

Now, moving beyond USB headsets, I love adding speakerphones - not for the mic, literally for the speaker. I love the Plantronics Portable USB Speakerphone. It requires no drivers; it just shows up as a mic and speaker automatically. I have it front and center in front of my monitor and I use it every day. It makes me feel like my Home Office is a real Office somehow.

If conversations are private I'll use the headset above for the audio, but when I want the sound to "come from the monitor" I'll SPLIT the audio. This is a pro tip. You can set up the Mic input as the headset mic and the Speaker output as a speakerphone (or your main speakers). I like using the speakerphone for voice and keeping the computer's output as the main speakers. Having this separation of voice and computer sounds is a small trick I play on myself, but it helps to create a sense of location where the remote video person comes out of separate speakers.

Best

Let's spend a little bit of money, but not so much that we break the bank. I'm going to make my own webcam. Rather than a plastic off-the-shelf webcam, let's take an actual mirrorless camera - the kind you'd take to a photography class - and make it a HIGH QUALITY webcam. We need a great camera and it needs to support HDMI out.
The camera also needs to be able to stay on all day long, not overheat, and it needs to run on AC power (not on battery). Here's a list of cameras that have clean HDMI out and can stay on all day. You might have one of these cameras in your closet! I like the Sony A6000 - I found this one on Craigslist for $300 - and here are its characteristics:

Max resolution: 1080p and a buttery smooth 60fps
Clean HDMI: Yes
Unlimited runtime: Yes
Connection type: Micro HDMI
Power: Dummy Battery
Verified by: Elgato
Notes: Requires dummy battery for power (sold separately). Retains full autofocus with clean HDMI output.

I need a "dummy battery" for this camera. Turns out this is a whole class of thing you can buy. Who knew? This camera has micro-HDMI, so I need a micro-HDMI to HDMI cable.

Now this is just a loose camera, so how will I mount it on my monitor? I like mounting it INSIDE the ring light. If you don't want the light you can just get this clamp mount. Or you can do what I did - get the CLAMP, then the LIGHT, and then put the CAMERA in that like a sandwich.

This camera, and cameras like it, output HDMI, and I need that HDMI fed into my computer so that the HDMI output of the camera looks like a regular webcam. The magical device that does this for us is the Elgato CamLink 4k. It's literally a little stick with an HDMI input on one end and USB3 on the other side. It took 5 minutes to install. This device also has the added benefit of being a generic "capture card" if you want to record or broadcast your gaming consoles OR other computers!

Here's a YouTube video I made that shows you these cameras, before and after - Good, Better, and BEST! What do you think? Thanks to John Miller and Jeff Fritz for their help and guidance!

* I use Amazon referral links and donate the little money to my kids' school. You support charter schools when you use these links.


Dotnet Depends is a great text mode development utility made with Gui.cs

I love me some text mode. ASCII, ANSI, VT100. Keep your 3D accelerated ray traced graphics and give me a lovely emoji-based progress bar. Miguel has a nice thing called Gui.cs and I bumped into it in an unexpected and lovely place. There are hundreds of great .NET Global Tools that you can install to make your development lifecycle smoother, and I was installing Martin Björkström's lovely "dotnet depends" tool (go give him a GitHub star now!) like this:

dotnet tool install -g dotnet-depends

Then I headed over to my Windows Terminal (get it free in the Store) and ran "dotnet depends" on my main website's code and was greeted by this (don't sweat the line spacing, that's a Terminal bug that'll be fixed soon):

How nice is this! It's a fully featured dependency explorer, but it's all in text mode and doesn't require me to use the mouse and take my hands off the keyboard. If I'm already deep into the terminal/text mode, this is a great example of a solid, useful tool.

But how hard was it to make? Surprisingly little work, as his code is very simple. This is a testament to how he used the API and how Miguel designed it. He's separated the UI and the Business Logic, of course. He does the analysis work and stores it in a graph variable. Here they're setting up some panes for the (text mode) windows:

Application.Init();
var top = new CustomWindow();

var left = new FrameView("Dependencies")
{
    Width = Dim.Percent(50),
    Height = Dim.Fill(1)
};
var right = new View()
{
    X = Pos.Right(left),
    Width = Dim.Fill(),
    Height = Dim.Fill(1)
};

It's split in half at this point, with the left side staying at 50%.

var orderedDependencyList = graph.Nodes.OrderBy(x => x.Id).ToImmutableList();
var dependenciesView = new ListView(orderedDependencyList)
{
    CanFocus = true,
    AllowsMarking = false
};
left.Add(dependenciesView);

var runtimeDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
runtimeDepends.Add(runtimeDependsView);

var packageDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
packageDepends.Add(packageDependsView);

var reverseDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
reverseDepends.Add(reverseDependsView);

right.Add(runtimeDepends, packageDepends, reverseDepends);
top.Add(left, right, helpText);
Application.Top.Add(top);

The right side gets three ListViews added to it and the left side gets the dependencies view. Top it off with some clean data binding to the views and an initial call to UpdateLists. Anytime the dependenciesView gets a SelectedChanged event we'll call UpdateLists again.

top.Dependencies = orderedDependencyList;
top.VisibleDependencies = orderedDependencyList;
top.DependenciesView = dependenciesView;

dependenciesView.SelectedItem = 0;
UpdateLists();

dependenciesView.SelectedChanged += UpdateLists;

Application.Run();

What's in UpdateLists? Filtering code for that graph variable from before.

void UpdateLists()
{
    var selectedNode = top.VisibleDependencies[dependenciesView.SelectedItem];

    runtimeDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is AssemblyReferenceNode)
        .Select(x => x.End).ToImmutableList());
    packageDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is PackageReferenceNode)
        .Select(x => $"{x.End}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
    reverseDependsView.SetSource(graph.Edges.Where(x => x.End.Equals(selectedNode))
        .Select(x => $"{x.Start}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
}

That's basically it, and it's fast as heck. Probably to be expected from the folks that brought you Midnight Commander. Are you working on any utilities or cool projects and might want to consider - gasp - text mode over a website?
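If you want to kick the tires on Gui.cs yourself, a hello world is only a handful of lines. This is my own minimal sketch (assuming the Terminal.Gui/Gui.cs package; the API surface has moved around over time, so treat the exact constructors as approximate):

using Terminal.Gui;

class Demo
{
    static void Main()
    {
        Application.Init();
        var top = Application.Top;

        // A window filling the screen, leaving row 0 free for a menu bar
        var win = new Window(new Rect(0, 1, top.Frame.Width, top.Frame.Height - 1), "Hello Gui.cs");
        win.Add(new Label(3, 2, "Text mode lives!"));

        top.Add(win);
        Application.Run(); // blocks until the app exits
    }
}

From there it's the same FrameView/ListView building blocks you see in dotnet-depends above.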


Docker Desktop for WSL 2 integrates Windows 10 and Linux even closer

Being able to seamlessly run Linux on Windows is making a bunch of common development tasks easier. When you're running WSL2 (Windows Subsystem for Linux 2) in a version of Windows 10 greater than build 18945, a BUNCH of useful and interesting scenarios light up and stuff just works.

Docker for Windows (download the Docker Desktop for WSL 2 Tech Preview here) is great, but it has historically worked on Windows by creating a Hyper-V virtual machine called Moby that is visible within the Hyper-V client. It's a utility VM, but it's one you're aware of. However, if WSL2 runs a real Linux kernel in Windows 10 and it's managing a virtual machine platform underneath (and not visible to) Hyper-V client tools, then why not just let WSL2 handle containers for us? That's exactly what the Docker Desktop WSL 2 Tech Preview aims to do. And just like WSL 2, it's fast.

...the time required to start a Docker daemon after a cold start is significantly faster. It takes less than 2 seconds to start the Docker daemon when compared to tens of seconds in the current version of Docker Desktop.

Once you've got a Linux (Ubuntu or the like) set up in WSL 2, you can right click on Docker Desktop and click "WSL 2 Tech Preview." This is a goofy and not-super-intuitive UI for now, but it's a moment in time. Then you just hit Start. NOTE: If you've already installed Docker within WSL 2 at the command line, stop it and let Docker Desktop manage its lifecycle. Here's the beginnings of their UI.

When I drop out to PowerShell/CMD on Windows I can run "docker context ls."

C:\Users\Scott\Desktop> docker context ls
NAME      DESCRIPTION                               DOCKER ENDPOINT
default   Current DOCKER_HOST based configuration   npipe:////./pipe/docker_engine
wsl *     Docker daemon hosted in WSL 2             npipe:////./pipe/docker_wsl

You can see there are two contexts, and I've run "docker context use wsl" so that's now my default.

Here is docker images from Ubuntu, and again from Windows (in PowerShell Core). They are the same! Sweet. Here I am using PowerShell Core (which is open source and cross-platform, natch) to manage my builds, which are themselves cross-platform, and I can run both a docker build or a metal build on both Windows and Linux, all seamlessly on the same box.

Also note, Simon from Docker points out: "We are using a non default dataroot in this mode to avoid corrupting a datastore you use without docker desktop in case something goes wrong. Stopping the docker desktop wsl daemon and restarting the one you installed manually should bring everything back." I noticed this because my "Windows Docker" and my original WSL2 docker had a list of images that I naively expected to be available here, but this is a new context and new dataroot, so you may need to fetch images again in this new world if you have historically been an active docker user.

So far I'm super impressed. Linux on the Windows Desktop feels right. It's Peanut Butter and Chocolate.


Ruby on Rails on Windows is not just possible, it's fabulous using WSL2 and VS Code

I've been trying on and off to enjoy Ruby on Rails development on Windows for many years. I was doing Ruby on Windows as far back as 13 years ago. There have been many valiant efforts to make Rails on Windows a good experience. However, given that Windows 10 can run Linux with WSL (Windows Subsystem for Linux) and now Windows runs Linux at near-native speeds with an actual shipping Linux kernel using WSL2, Ruby on Rails folks using Windows should do their work in WSL2.

Running Ruby on Rails on Windows

Get a recent Windows 10

WSL2 will be released later this year, but for now you can easily get it by signing up for Windows Insiders Fast and making sure your version of Windows is 18945 or greater. Just run "winver" to see your build number. Run Windows Update and get the latest.

Enable WSL2

You'll want the newest Windows Subsystem for Linux. From a PowerShell admin prompt run this:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

and head over to the Windows Store and search for "Linux" or get Ubuntu 18.04 LTS directly. Download it, run it, make your sudo user.

Make sure your distro is running at max speed with WSL2. From that earlier PowerShell prompt, run wsl --list -v to see your distros and their WSL versions.

C:\Users\Scott\Desktop> wsl --list -v
  NAME            STATE    VERSION
* Ubuntu-18.04    Running  2
  Ubuntu          Stopped  1
  WLinux          Stopped  1

You can upgrade any WSL1 distro like this, and once it's done, it's done.

wsl --set-version "Ubuntu-18.04" 2

And certainly feel free to get cool fonts and styles and make yourself a nice shiny Linux experience...maybe with the Windows Terminal.

Get the Windows Terminal

Bonus points: get the new open source Windows Terminal for a better experience at the command line. Install it AFTER you've set up Ubuntu or a Linux and it'll auto-populate its menu for you. Otherwise, edit your profiles.json and make a profile with a commandline like this:

"commandline" : "wsl.exe -d Ubuntu-18.04"

See how I'm calling wsl -d (for distro) with the short name of the distro?

Set up Ruby on Rails

Since I have a real Ubuntu environment on Windows I can just follow these instructions to set up Rails! The Ubuntu instructions work because it is Ubuntu! https://gorails.com/setup/ubuntu/18.04

Additionally, I can install as many Linuxes as I want, even a Dev vs. Prod environment if I like. WSL2 is much lighter weight than a full Virtual Machine.

Once Rails is set up, I'll try making a new hello world:

rails new myapp

and here's the result! I can also run "explorer.exe ." to launch Windows Explorer and see and manage my Linux files. That's allowed now in WSL2 because it's running a Plan9 server for file access.

Install VS Code and the VS Code Remote Extension Pack

I'm going to install the VS Code Remote Extension pack so I can develop from Windows on remote machines OR in WSL or a Container directly. I can click the lower left corner of VS Code or check the Command Palette for this list of menu items. Here I can "Reopen Folder in WSL" and pick the distro I want to use.

Now that I've opened the folder for development in WSL, look closely at the lower left corner. You can see I'm in a WSL development mode AND Visual Studio Code is recommending I install a Ruby VS Code extension...inside WSL! I don't even have Ruby and Rails on Windows. I'm going to have the Ruby language servers and VS Code headless parts live in WSL - in Linux - where they'll be the most useful.
This synergy, this balance between Windows (which I enjoy) and Linux (whose command line I enjoy) has turned out to be super productive. I'm able to do all the work I want - Go, Rust, Python, .NET, Ruby - and move smoothly between environments. There's not a clear separation like there is with the "run it in a VM" solution. I can access my Windows files from /mnt/c from within Linux, and I can always get to my Linux files at \\wsl$ from within Windows.

Note that I'm running rails server -b=0.0.0.0 to bind on all available IPs, and this makes Rails available to "localhost," so I can hit the Rails site from Windows! It's my machine, so it's my localhost (the networking complexities are handled by WSL2).

$ rails server -b=0.0.0.0
=> Booting Puma
=> Rails 6.0.0.rc2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.1 (ruby 2.6.2-p47), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop

Here it is in new Edge (Chromium). So this is Ruby on Rails running in WSL, as browsed to from Windows, using the new Edge with Chromium at its heart. Cats and dogs, living together, mass hysteria.

Even better, I can install the ruby-debug-ide gem inside WSL and now I'm doing interactive debugging from VS Code, but again, note that the "work" is happening inside WSL. Enjoy!


System.Text.Json and new built-in JSON support in .NET Core

In a world where JSON (JavaScript Object Notation) is everywhere, it's long been somewhat frustrating that .NET didn't have built-in JSON support. JSON.NET is great and has served us well, but it's remained a 3rd party dependency for basic stuff like an ASP.NET web site or a simple console app.

Back in 2018, plans were announced to move JSON into .NET Core 3.0 as an intrinsic supported feature, and while they're at it, get double the performance or more with Span<T> support and no memory allocations. ASP.NET in .NET Core 3.0 removes the JSON.NET dependency but still allows you to add it back in a single line if you'd like.

NOTE: This is all automatic and built in with .NET Core 3.0, but if you're targeting .NET Standard or .NET Framework, install the System.Text.Json NuGet package (make sure to include previews and install version 4.6.0-preview6.19303.8 or higher). In order to get the integration with ASP.NET Core, you must target .NET Core 3.0.

It's very clean as well. Here's a simple example.

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

namespace verysmall
{
    class WeatherForecast
    {
        public DateTimeOffset Date { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var w = new WeatherForecast() { Date = DateTime.Now, TemperatureC = 30, Summary = "Hot" };
            Console.WriteLine(JsonSerializer.Serialize<WeatherForecast>(w));
        }
    }
}

The default options result in minified JSON as well.

{"Date":"2019-07-27T00:58:17.9478427-07:00","TemperatureC":30,"Summary":"Hot"}

Of course, when you're returning JSON from a Controller in ASP.NET it's all automatic, and with .NET Core 3.0 it'll automatically use the new System.Text.Json unless you override it. Here's an example where we pull out some fake weather data (5 randomly created reports) and return the array.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}

The application/json content type is used and JSON is returned by default. If the return type was just string, we'd get text/plain. Check out this YouTube video to learn more details about how System.Text.Json works and how it was designed. I'm looking forward to working with it more!
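One more quick sketch before moving on: deserializing is just as terse. This example is my own (it reuses the WeatherForecast class from above), and JsonSerializerOptions is how you opt into things like camelCase names or indented output:

using System;
using System.Text.Json;

class WeatherForecast
{
    public DateTimeOffset Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}

class RoundTrip
{
    static void Main()
    {
        var json = "{\"Date\":\"2019-07-27T00:58:17.9478427-07:00\",\"TemperatureC\":30,\"Summary\":\"Hot\"}";

        // JSON string -> strongly typed object
        var forecast = JsonSerializer.Deserialize<WeatherForecast>(json);
        Console.WriteLine($"{forecast.Summary}, {forecast.TemperatureC}C");

        // Options control naming policy, indentation, and more
        var options = new JsonSerializerOptions
        {
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
            WriteIndented = true
        };
        Console.WriteLine(JsonSerializer.Serialize(forecast, options));
    }
}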


Installing PowerShell with one line as a .NET Core global tool

I'm a huge fan of .NET Core global tools. I've done a podcast on Global Tools. Just like Node and other platforms have global tools that can be easily and quickly installed and then used in build scripts, CI/CD (Continuous Integration/Continuous Deployment) systems, or just as general command line utilities, .NET Global Tools are easily made (by you!) and distributed via NuGet. Some cool examples (and there are hundreds) are the "Try .NET" workshop runner and creator that you can use to make interactive documentation, or coverlet for code coverage. There's a great and growing list of .NET Core Global Tools on GitHub.

If you've got the .NET SDK installed you can try out a global tool just like this.

dotnet tool install -g dotnetsay

Then run this example with "dotnetsay," it's fun.

Stepping back a moment, you may be familiar with PowerShell. It's a scripting language and a command line shell like Bash or DOS or the Windows Command Prompt. You may think of PowerShell as a tool for maintaining and managing Windows Servers. However, in recent years PowerShell has gone cross-platform and runs most anywhere. It's lightweight and has .NET Core at its, ahem, core. You can use PowerShell for scripting systems on any platform, and if you're a .NET developer, the team has made installing and immediately using PowerShell in scripts a one-liner - which is genius. It's PowerShell as a .NET Global Tool.

Here's an example output from my system running Ubuntu. I just "dotnet tool install --global PowerShell."

$ dotnet --version
2.1.502
$ dotnet tool install --global PowerShell
You can invoke the tool using the following command: pwsh
Tool 'powershell' (version '6.2.2') was successfully installed.
$ pwsh
PowerShell 6.1.1
https://aka.ms/pscore6-docs
Type 'help' to get help.

PS /mnt/c/Users/Scott/Desktop> exit

Here I've checked that I have .NET 2.x or above, then I install PowerShell. I can run scripts or I can drop into the interactive shell. Note the PS prompt and my current directory above.

In fact, PowerShell is so useful as a scripting language when combined with .NET Core that PowerShell has been included as a global tool within the .NET Core 3.0 Preview Docker images since Preview 4. This means you can use PowerShell lines/scripts inside Docker images.

FROM mcr.microsoft.com/dotnet/core/sdk:3.0
RUN pwsh -c Get-Date
RUN pwsh -c "Get-Module -ListAvailable | Select-Object -Property Name, Path"

Being able to easily install PowerShell as a global tool means you can count on it in your scripts, CI/CD systems, or Docker containers. It's also nice to be able to use existing PowerShell scripts cross-platform. I'm impressed with this idea - installing PowerShell itself as a .NET Global Tool. Very clever and useful.


DragonFruit and System.CommandLine is a new way to think about .NET Console apps

There's some interesting stuff quietly happening in the "Console App" world within open source .NET Core right now. Within the https://github.com/dotnet/command-line-api repository are three packages:

System.CommandLine.Experimental
System.CommandLine.DragonFruit
System.CommandLine.Rendering

These are interesting experiments and directions that are exploring how to make Console apps easier to write, more compelling, and more useful. The one I am the most infatuated with is DragonFruit.

Historically, Console apps in classic C look like this:

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, World!\n");
    return 0;
}

That first argument argc is the count of the number of arguments you've passed in, and argv is an array of pointers to 'strings,' essentially. The actual parsing of the command line arguments and the semantic meaning of the args you've decided on are totally on you. C# has done it this way, since always.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
}

It's a pretty straight conceptual port from C to C#, right? It's an array of strings. Argc is gone because you can just use args.Length. If you want to make an app that does a bunch of different stuff, you've got a lot of string parsing before you get to DO the actual stuff your app is supposed to do. In my experience, a simple console app with real, proper command line arg validation can end up with half the code parsing crap and half doing stuff.

myapp.com someCommand --param:value --verbose

The larger question - one that DragonFruit tries to answer - is why doesn't .NET do the boring stuff for you in an easy and idiomatic way? From their docs, what if you could declare a strongly-typed Main method? This was the question that led to the creation of the experimental app model called "DragonFruit," which allows you to create an entry point with multiple parameters of various types and using default values, like this:

static void Main(int intOption = 42, bool boolOption = false, FileInfo fileOption = null)
{
    Console.WriteLine($"The value of intOption is: {intOption}");
    Console.WriteLine($"The value of boolOption is: {boolOption}");
    Console.WriteLine($"The value of fileOption is: {fileOption?.FullName ?? "null"}");
}

In this concept, the Main method - the entry point - is an interface that can be used to infer options and apply defaults.

using System;

namespace DragonFruit
{
    class Program
    {
        /// <summary>
        /// DragonFruit simple example program
        /// </summary>
        /// <param name="verbose">Show verbose output</param>
        /// <param name="flavor">Which flavor to use</param>
        /// <param name="count">How many smoothies?</param>
        static int Main(
            bool verbose,
            string flavor = "chocolate",
            int count = 1)
        {
            if (verbose)
            {
                Console.WriteLine("Running in verbose mode");
            }
            Console.WriteLine($"Creating {count} banana {(count == 1 ? "smoothie" : "smoothies")} with {flavor}");
            return 0;
        }
    }
}

I can run it like this:

> dotnet run --flavor Vanilla --count 3
Creating 3 banana smoothies with Vanilla

The way DragonFruit does this is super clever. During the build process, DragonFruit changes this public strongly typed Main to a private one (so it's not seen from the outside - .NET won't consider it an entry point). It's then replaced with a Main like this, but you'll never see it as it's in the compiled/generated artifact.

public static async Task<int> Main(string[] args)
{
    return await CommandLine.ExecuteAssemblyAsync(typeof(AutoGeneratedProgram).Assembly, args, "");
}

So DragonFruit has swapped your Main for its smarter Main and the magic happens!
You'll even get free auto-generated help!

DragonFruit:
  DragonFruit simple example program

Usage:
  DragonFruit [options]

Options:
  --verbose          Show verbose output
  --flavor <flavor>  Which flavor to use
  --count <count>    How many smoothies?
  --version          Display version information

If you want less magic and more power, you can use the same APIs DragonFruit uses to make very sophisticated behaviors (there's a rough sketch of that at the end of this post). Check out the Wiki and Repository for more and perhaps get involved in this open source project!

I really like this idea and I'd love to see it taken further! Have you used DragonFruit on a project? Or are you using another command line argument parser?
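Here's what that "less magic" route could look like using System.CommandLine directly. This sketch is mine, not from the project's docs, and the experimental API was still churning in 2019, so treat the exact constructors and names as approximate:

using System;
using System.CommandLine;
using System.CommandLine.Invocation;

class Program
{
    static int Main(string[] args)
    {
        var rootCommand = new RootCommand("DragonFruit simple example program");
        rootCommand.AddOption(new Option("--verbose", "Show verbose output") { Argument = new Argument<bool>() });
        rootCommand.AddOption(new Option("--flavor", "Which flavor to use") { Argument = new Argument<string>() });
        rootCommand.AddOption(new Option("--count", "How many smoothies?") { Argument = new Argument<int>() });

        // Bind the parsed options to a strongly typed handler - this is
        // essentially what DragonFruit generates for you at build time.
        rootCommand.Handler = CommandHandler.Create<bool, string, int>((verbose, flavor, count) =>
        {
            if (verbose) Console.WriteLine("Running in verbose mode");
            Console.WriteLine($"Creating {count} banana {(count == 1 ? "smoothie" : "smoothies")} with {flavor}");
        });

        return rootCommand.InvokeAsync(args).Result;
    }
}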


Real World Cloud Migrations: Azure Front Door for global HTTP and path based load-balancing

As I've mentioned lately, I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, a CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.

I'm breaking a single machine into a series of small sites, BUT I want to still maintain ALL my existing URLs (for good or bad), and the most important one is hanselman.com/blog/ that I now want to point to hanselmanblog.azurewebsites.net. That means that Azure Front Door will be receiving all the traffic - it's the Front Door! - and then forwarding it on to the Azure Web App. That means:

hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz

There are a few things to consider when dealing with reverse proxies like this, and I've written about that in detail in this article on Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies.

You can and should read in detail about Azure Front Door here. It's worth considering a few things. Front Door MAY be overkill for what I'm doing because I have a small, modest site. Right now I've got several backends, but they aren't yet globally distributed. If I had a system with lots of regions and lots of App Services all over the world AND a lot of static content, Front Door would be a perfect fit. Right now I have just a few App Services ("Backends" in this context) and I'm using Front Door primarily to manage the hanselman.com top level domain and manage traffic with URL routing.

On the plus side, that might mean Azure Front Door was exactly what I needed. It was super easy to set up Front Door, as there's a visual Front Door Designer. It took less than 20 minutes to get it all routed, and SSL certs took just a few hours more. You can see below that I associated staging.hanselman.com with two Backend Pools. This UI in the Azure Portal is (IMHO) far easier than the Azure Application Gateway. Additionally, Front Door is global while App Gateway is regional. If you were a massive global site, you might put Azure Front Door in, ahem, front, and Azure App Gateway behind it, regionally. Again, a little overkill as my Pools are pools of one, but it gives me room to grow. I could easily balance traffic globally in the future.

CONFUSION: In the past with my little startup I've used Azure Traffic Manager to route traffic to several App Services hosted all over the globe. When I heard of Front Door I was confused, but it seems like Traffic Manager is mostly global DNS load balancing for any network traffic, while Front Door is Layer 7 load balancing for HTTP traffic, and uses a variety of reasons to route traffic. Azure Front Door can also act as a CDN and cache all your content as well. There's lots of detail on Front Door's routing architecture details and traffic routing methods.

Azure Front Door is definitely the most sophisticated and comprehensive system for fronting all my traffic. I'm still learning what's the right size app for it and I'm not sure a blog is the ideal example app. Here's how I set up /blog to hit one Backend Pool.
I have it accepting both HTTP and HTTPS. Originally I had a few extra Front Door rules, one for HTTP, one for HTTPS, and I set the HTTP one to redirect to HTTPS. However, Front Door charges 3 cents an hour for each of the first 5 routing rules (then about a penny an hour for each after 5), and I don't (personally) think I should pay for what I consider "best practice" rules. That means forcing HTTPS (an internet standard, these days) as well as URL canonicalization with a trailing slash after paths. That means /blog should 301 to /blog/, etc. These are simple prescriptive things that everyone should be doing. If I was putting a legacy app behind Front Door, then this power and flexibility in path control would be a boon that I'd be happy to pay for. But in these cases I may be able to have that redirection work done lower down in the app itself and save money every month. I'll update this post if the pricing changes.

After I set up Azure Front Door I noticed my staging blog was getting hit every few seconds, all day, forever. I realized there are some health checks, but since there are 80+ Azure Front Door locations and they are all checking the health of my app, it was adding up to a lot of traffic. For a large app, you need these health checks to make sure traffic fails over and you really know if your app is healthy. For my blog, less so.

There are a few ways to tell Front Door to chill. First, I don't need Azure Front Door doing GET requests on /. I can instead ask it to check something lighter weight. With ASP.NET 2.2 it's as easy as adding HealthChecks. It's much easier, less traffic, and you can make the health check as comprehensive as you want (there's a minimal sketch of the wiring at the end of this post).

app.UseHealthChecks("/healthcheck");

Next, I turned the Interval WAY up so it wouldn't bug me every few seconds. These two small changes made a huge difference in my traffic, as I didn't have so much extra "pinging."

After setting up Azure Front Door, I also turned on Custom Domain HTTPS and pointed staging to it. It was very easy to set up and was included in the cost. I haven't decided if I want to set up Front Door's caching or not, but it might mean an easier, more central way than using a CDN manually and changing the URLs for my sites' static content and images. In fact, the POP (Point of Presence) locations for Front Door are the same as those for Azure CDN.

NOTE: I will have to at some point manage the Apex/Naked domain issue where hanselman.com and www.hanselman.com both resolve to my website. It seems this can be handled by either CNAME flattening or DNS chasing, and I need to check with my DNS provider to see if this is supported. I suspect I can do it with an ALIAS record. Barring that, Azure also offers an Azure DNS hosting service.

There is another option I haven't explored yet called Azure Application Gateway that I may test out and see if it's cheaper for what I need. I primarily need SSL cert management and URL routing.

I'm continuing to explore as I build out this migration plan. Let me know your thoughts in the comments.
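For reference, here's roughly what that lightweight health check endpoint looks like in an ASP.NET Core 2.2 Startup class. A minimal sketch - the /healthcheck path is just my choice, and you can register richer checks in AddHealthChecks():

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the health check services; custom checks plug in here
        services.AddHealthChecks();
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Front Door's probes hit this cheap endpoint instead of GET /
        app.UseHealthChecks("/healthcheck");
        app.UseMvc();
    }
}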


Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies

I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, a CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc. I'm also moving from an ASP.NET 4 (was ASP.NET 2 until recently) site to ASP.NET Core 2.x LTS and changing my URL structure.

I am aiming to save money, but I'm not doing this as a "spend basically nothing" project. Yes, I could convert my site to a static HTML generated blog using any number of great static site generators, or even a Headless CMS. Yes, I could host it in Azure Storage fronted by a CMS, or even as a series of Azure Functions. But I have 17 years of content in DasBlog, I like DasBlog, it's being actively updated to .NET Core, and it's a fun app. I also have custom Razor sites in the form of my podcast site and they work great with a great workflow. I want to find a balance of cost effectiveness, features, ease of use, and reliability. What I have now is a sinking feeling like my site is gonna die tomorrow and I'm not ready to deal with it. So, there you go.

Currently my sites live on a real machine with real folders, and it's fronted by IIS on a Windows Server. There's an app (an IIS Application, to be clear) living at \ so that means hanselman.com/ hits /, which is likely c:\inetpub\wwwroot, full stop. For historical reasons, when you hit hanselman.com/blog/ you're hitting the /blog IIS Application, which could be at d:\whatever but may be at c:\inetpub\wwwroot\blog or even at c:\blog. Who knows.

The Application and ASP.NET within it know that the site is at hanselman.com/blog. That's important, since I may write a URL like ~/about when writing code. If I'm in the hanselman.com/blog app, then ~/about means hanselman.com/blog/about. If I write /about, that means hanselman.com/about. So the ~ is a shorthand for "starting at this App's base URL." This is great and useful and makes link generation super easy, but it only works if your app knows what its server-side base URL is. To be clear, we are talking about the reality of the generated URL that's sent to and from the browser, not about any physical reality on the disk or server or app.

I've moved my world to three Azure App Services called hanselminutes, hanselman, and hanselmanblog. They have names like http://hanselman.azurewebsites.net for example.

ASIDE: You'll note that hitting hanselman.azurewebsites.net will hit an app that looks stopped. I don't want that site to serve traffic from there; I want it to be served from http://hanselman.com, right? Specifically, only from Azure Front Door, which I'll talk about in another post soon. So I'll use the Access Restrictions and Software Based Networking in Azure to deny all traffic to that site, except traffic from Azure - in this case, from the Azure Front Door Reverse Proxy I'll be using. That looks like this in the Access Restrictions part of the Azure Portal.

Since the hanselman.com app will point to hanselman.azurewebsites.net (or one of its staging slots) there's no issue with URL generation. If I say / I mean /, the root of the site. If I generate a URL like "~/about" I'll get hanselman.com/about, right? But with http://hanselmanblog.azurewebsites.net it's different.
I want hanselman.com/blog/ to point to hanselmanblog.azurewebsites.net. That means that Azure Front Door will be receiving traffic, then forwarding it on to the Azure Web App. That means:

hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz

There are a few things to consider when dealing with reverse proxies like this. Is part of the /path being removed, or is a path being added?

In the case of DasBlog, we have a configuration setting so that the app knows where it LOOKS like it is, from the browser URL's perspective. My blog is at /blog, so I add that in some middleware in my Startup.cs. Certainly YOU don't need to have this in config - do whatever works for you as long as context.Request.PathBase is set as the app should see it. I set this very early in my pipeline. That if statement is there because most folks don't install their blog at /blog, so it doesn't add the middleware.

//if you've configured it at /blog or /whatever, set that pathbase so ~ will generate correctly
Uri rootUri = new Uri(dasBlogSettings.SiteConfiguration.Root);
string path = rootUri.AbsolutePath;

//Deal with path base and proxies that change the request path
if (path != "/")
{
    app.Use((context, next) =>
    {
        context.Request.PathBase = new PathString(path);
        return next.Invoke();
    });
}

Sometimes you want the OPPOSITE of this. That would mean that I wanted, perhaps, hanselman.com to point to hanselman.azurewebsites.net/blog/. In that case I'd do this in my Startup.cs's Configure:

app.UsePathBase("/blog");

Be aware that if you're hosting ASP.NET Core apps behind Nginx or Apache or really anything, you'll also want ASP.NET Core to respect X-Forwarded-For and other X-Forwarded standard headers. You'll also likely want the app to refuse to speak to anyone who isn't a certain list of proxies or configured URLs. I configure these in Startup.cs's ConfigureServices from a semicolon delimited list in my config, but you can do this in a number of ways.

services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.All;
    options.AllowedHosts = Configuration.GetValue<string>("AllowedHosts")?.Split(';').ToList<string>();
});

Since Azure Front Door adds these headers as it forwards traffic, from my app's point of view it "just works" once I've added that above and then this in Configure():

app.UseForwardedHeaders();

There seems to be some confusion on hosting behind a reverse proxy in a few GitHub Issues. I'd like to see my scenario ( /foo -> / ) be a single line of code, as we see that the other scenario ( / -> /foo ) is a single line (I've sketched one possible approach at the end of this post). Have you had any issues with URL generation when hosting your Apps behind a reverse proxy?
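Until that single line exists in the framework, you can roll it yourself. Here's a hypothetical sketch - UsePathBaseFromProxy is my own name, not a real API - that wraps the PathBase middleware from above into a one-liner:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class ProxyPathBaseExtensions
{
    // Tell the app it "lives" at externalPathBase from the browser's
    // perspective, even though the proxy strips that prefix.
    public static IApplicationBuilder UsePathBaseFromProxy(
        this IApplicationBuilder app, string externalPathBase)
    {
        return app.Use((context, next) =>
        {
            context.Request.PathBase = new PathString(externalPathBase);
            return next();
        });
    }
}

Then Configure() just needs app.UsePathBaseFromProxy("/blog"); and ~ links generate correctly.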


Making a tiny .NET Core 3.0 entirely self-contained single executable

I've always been fascinated by making apps as small as possible, especially in the .NET space. No need to ship any files - or methods - that you don't need, right? I've blogged about optimizations you can make in your Dockerfiles to make your .NET containerized apps small, as well as using the ILLink.Tasks linker from Mono to "tree trim" your apps to be as small as they can be. Work is ongoing, but with .NET Core 3.0 preview 6, ILLink.Tasks is no longer supported and instead the Tree Trimming feature is built into .NET Core directly.

Here is a .NET Core 3.0 Hello World app. Now I'll open the csproj and add PublishTrimmed = true.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>
</Project>

And I will compile and publish it for win-x64, my chosen target.

dotnet publish -r win-x64 -c release

Now it's just 64 files and 28 megs!

If your app uses reflection you can let the Tree Trimmer know by telling the project system about your Assembly, or even specific Types or Methods you don't want trimmed away (there's a sketch of why this matters at the end of this post).

<ItemGroup>
  <TrimmerRootAssembly Include="System.IO.FileSystem" />
</ItemGroup>

The intent in the future is to have .NET be able to create a single small executable that includes everything you need. In my case I'd get "supersmallapp.exe" with no dependencies. Until then, there's a cool global utility called Warp. This utility, combined with the .NET Core 3.0 SDK's now-built-in Tree Trimmer, creates a 13 meg single executable that includes everything it needs to run.

C:\Users\scott\Desktop\SuperSmallApp> dotnet warp
Running Publish...
Running Pack...
Saved binary to "SuperSmallApp.exe"

And the result is just a 13 meg single EXE ready to go on Windows.

If you want, you can combine this "PublishTrimmed" property with "PublishReadyToRun" as well and get a small AND fast app.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishTrimmed>true</PublishTrimmed>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>
</Project>

These are not just IL (Intermediate Language) assemblies that are JITted (just-in-time compiled) on the target machine. These are more "pre-chewed" AOT (Ahead of Time) compiled assemblies with as much native code as possible to speed up your app's startup time. From the blog post:

In terms of compatibility, ReadyToRun images are similar to IL assemblies, with some key differences. IL assemblies contain just IL code. They can run on any runtime that supports the given target framework for that assembly. For example a netstandard2.0 assembly can run on .NET Framework 4.6+ and .NET Core 2.0+, on any supported operating system (Windows, macOS, Linux) and architecture (Intel, ARM, 32-bit, 64-bit). R2R assemblies contain IL and native code. They are compiled for a specific minimum .NET Core runtime version and runtime environment (RID). For example, a netstandard2.0 assembly might be R2R compiled for .NET Core 3.0 and Linux x64. It will only be usable in that or a compatible configuration (like .NET Core 3.1 or .NET Core 5.0, on Linux x64), because it contains native code that is only usable in that runtime environment.

I'll keep exploring .NET Core 3.0, and you can install the SDK here in minutes. It won't mess up any of your existing stuff.
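To make the reflection caveat concrete, here's a hypothetical example of the kind of code the trimmer can't see through. The type name only exists as a string at runtime, so static analysis may decide the target's assembly is unused and trim it away:

using System;

class Program
{
    static void Main()
    {
        // The trimmer analyzes static references; a type looked up via a
        // runtime string is invisible to it, so after trimming this lookup
        // can come back null unless the relevant assembly is rooted with
        // TrimmerRootAssembly in the csproj.
        var type = Type.GetType("System.Text.StringBuilder");
        object sb = Activator.CreateInstance(type);
        Console.WriteLine(sb.GetType().FullName);
    }
}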
Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today! © 2018 Scott Hanselman. All rights reserved.


Visual Studio Code Remote Development over SSH to a Raspberry Pi is butter

A lot of folks, myself included, have tried to install VS Code on the Raspberry Pi. In fact, there's a lovely process for this now. However, we have to ask ourselves: is a Raspberry Pi really powerful enough to run a full development environment AND the app being debugged? Perhaps, but maybe this is a job for remote debugging. That means installing Visual Studio Code locally on my Windows or Mac machine, then having Visual Studio Code install its headless server component (for ARMv7) on the Pi. In January I blogged about Remote Debugging with VS Code on a Raspberry Pi using .NET Core on ARM. It was, and is, a little hacked together with SSH and wishes. Let's set up a proper VS Code Remote environment so I can be productive on a Pi while still enjoying my main laptop's abilities.

First, can you ssh into your Raspberry Pi without a password prompt? If not, be sure to set that up with OpenSSH, which is now installed on Windows 10 by default - generate a key pair with ssh-keygen and append your public key to ~/.ssh/authorized_keys on the Pi. You know you've got it down when you can "ssh pi@mypi" and it just drops you into a remote prompt.

Next:

- Get Visual Studio Code Insiders plus the Remote Development Extension
- Uninstall the "Remote - SSH" extensions; disabling them isn't enough because you want to replace them with...
- Important - the Remote - SSH Nightly Builds

From within VS Code Insiders, hit Ctrl/CMD+P and type "Remote-SSH" for some of the choices. I can connect to Host and VS Code will SSH into the Pi and install the VS Code server components in ~/.vscode-server-insiders and then connect to them. It will take a minute as it's downloading a 25 meg GZip and unzipping it into this temp folder. You'll know you're connected when you see this green badge as seen below that says "SSH: hostname."

Then when you go "File | Open Folder" from the main menu, you'll get the remote system's files! You are working and editing locally on remote files. Note here that some of the extensions are NOT installed locally! The Python language services (using Jedi) are running remotely on the Raspberry Pi, so when I get IntelliSense, I'm getting it remoted from the actual machine I'm developing on, not a guess from my local box. When I open a Terminal with Ctrl+~, I see that I'm automatically getting a remote terminal, and I've even got htop running in it!

Check this out, I'm doing a remote interactive debugging session against CrowPi samples running on the Raspberry Pi (in Python 2) remotely from VS Code on my Windows 10 machine! I did need to make one change to the remote settings as it was defaulting to Python 3 and I wanted to use Python 2 for these samples (the setting is sketched at the end of this post). This has been a very smooth process and I remain super impressed with the VS Remote Development experience. I'll be looking at containers and remote WSL debugging soon as well. The next step is to try C#, remotely, which will mean making sure the C# OmniSharp extension works over SSH as well.
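For reference, that Python change was one line in the remote (SSH) settings. Treat this as an illustrative sketch of my edit - "python.pythonPath" was the Python extension's setting name in this era, and your interpreter path may differ:

{
    // Remote settings on the Pi: point the Python extension at Python 2
    "python.pythonPath": "/usr/bin/python2"
}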
Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today! © 2018 Scott Hanselman. All rights reserved.

This changes everything for the DIY Diabetes Community - TidePool partners with Medtronic and Dexcom

I don’t speak in hyperbole very often, and I want to make sure that you all understand what a big deal this is for the diabetes DIY community. Everything that we’ve worked for for the last 20 years, it all changes now. #WeAreNotWaiting "You probably didn’t see this coming, [Tidepool] announced an agreement to partner with our friends at Medtronic Diabetes to support a future Bluetooth-enabled MiniMed pump with Tidepool Loop. Read more here: https://www.tidepool.org/blog/tidepool-loop-medtronic-collaboration" Translation? This means that diabetics will be able to choose their own supported equipment and build their own supported FDA Approved Closed Loop Artificial Pancreases. Open Source Artificial Pancreases will become the new standard of care for Diabetes in 2019. Every diabetic engineer ever, the day after they were diagnosed, tries to solve their (or their loved one's) diabetes with open software and open hardware. Every one. I did it in the early 90s. Someone diagnosed today will do this tomorrow. Every time. I tried to send my blood sugar to the cloud from a PalmPilot. Every person diagnosed with diabetes ever does this. Has done this. We try to make our own systems. Then @NightscoutProj happened and #WeAreNotWaiting happened and we shared code and now we sit on the shoulders of people who GAVE THEIR IDEAS TO US FOR FREE. Here's the first insulin pump. Imagine a disease this miserable that you'd choose this. Type 1 Diabetes IS NOT FUN. Now we have Bluetooth and WiFi and the Cloud, but I still have an insulin pump I bought off of Craigslist. Imagine a watch that gives you an electrical shock so you can check your blood sugar. We are all just giant bags of meat and water under pressure, and poking the meatbag 10 times a day with needles and #diabetes testing strips SUUUUCKS. The work of early #diabetes pioneers is now being leveraged by @Tidepool_org to encourage large diabetes hardware and sensor manufacturers to - wait for it - INTEROPERATE on standards we can talk to. Just hours after I got off stage speaking on this very topic at @RefactrTech, it turns out that @howardlook and the wonderful friends at @Tidepool_org like @kdisimone and @ps2 and pioneer @bewestisdoing and others announced there are now partnerships with MULTIPLE insulin pump manufacturers AND multiple sensors! We the DIY #diabetes community declared #WeAreNotWaiting and, dammit, we'd do this ourselves. And now Tidepool is expressing the intent to put an Artificial Pancreas in the damn App Store - along with Angry Birds - WITH SUPPORT FOR WARRANTIED NEW BLE PUMPS. I could cry. You see this #diabetes insulin pump? It’s mine. See those cracks? THOSE ARE CRACKS IN MY INSULIN PUMP. This pump does not have a warranty, but it’s the only one that I have if I want an open source artificial pancreas. Now I’m going to have real choices, multiple manufacturers. It absolutely cannot be overstated how many people keep this community alive, from early Python libraries that talked to insulin pumps, to man-in-the-middle attacks to gain access to our own data, to custom hardware boards created to bridge the new and the old. To the known in the unknown, the song in the unsung, we in the Diabetes Community appreciate you all. We are standing on the shoulders of giants - I want to continue to encourage open software and open hardware whenever possible. Get involved. Also, if you're diabetic, consider buying a Nightscout Xbox Avatar accessory so you can see yourself represented while you game!
Oh, and one other thing: journalists who cover the Diabetes DIY community, please let us read your articles before you publish them. They all have mistakes and over-generalizations and inaccuracies and it's awkward to read them. That is all. Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.


Clever little C# and ASP.NET Core features that make me happy

I recently needed to refactor my podcast site, which is written in ASP.NET Core 2.2 and running in Azure. The Simplecast-backed API changed in a few major ways from their v1 to a new redesigned v2, so there was a big backend change, and that was a chance to tighten up the whole site. As I was refactoring I made a few small notes of things that I liked about the site. A few were C# features that I'd forgotten about! C# is on version 8 but there were little happinesses in 6.0 and 7.0 that I hadn't incorporated into my own idiomatic view of the language. This post is collecting a few things for myself, and you, if you like. I've got a mapping between two collections of objects. There's a list of all Sponsors, ever. Then there's a mapping of shows where a show might have n sponsors.

Out Var

I have to "TryGetValue" because I can't be sure if there's a value for a show's ID. I wish there were a more compact way to do this (a language shortcut for TryGetValue, but that's another post - though see the aside at the end of this one).

Shows2Sponsor map = null;
shows2Sponsors.TryGetValue(showId, out map);
if (map != null)
{
    var retVal = sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
    return retVal;
}
return null;

I forgot that in C# 7.0 they added "out var" parameters, so I don't need to declare the map or its type. Tighten it up a little and I've got this. The LINQ query there returns a List of sponsor details from the main list, using the IDs returned from the TryGetValue.

if (shows2Sponsors.TryGetValue(showId, out var map))
    return sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
return null;

Type aliases

I found myself building JSON types in C# that were using the "Newtonsoft.Json.JsonPropertyAttribute" but the name is too long. So I can do this:

using J = Newtonsoft.Json.JsonPropertyAttribute;

Which means I can do this:

[J("description")] public string Description { get; set; }
[J("long_description")] public string LongDescription { get; set; }

LazyCache

I blogged about LazyCache before, and its challenges, but I'm loving it. Here I have a GetShows() method that returns a List of Shows. It checks a cache first, and if it's empty, then it will call the Func that returns a List of Shows, and that Func is the thing that does the work of populating the cache. The cache lasts for about 8 hours. Works great.

public async Task<List<Show>> GetShows()
{
    Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
    return await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
}

private async Task<List<Show>> PopulateShowsCache()
{
    List<Show> shows = await _simpleCastClient.GetShows();
    _logger.LogInformation($"Loaded {shows.Count} shows");
    return shows.Where(c => c.Published == true && c.PublishedAt < DateTime.UtcNow).ToList();
}
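On that TryGetValue aside: a compact option does exist today. Assuming shows2Sponsors is a regular Dictionary, .NET Core 2.0+ ships a GetValueOrDefault extension (System.Collections.Generic.CollectionExtensions, defined on IReadOnlyDictionary) that collapses the lookup to roughly this sketch:

// GetValueOrDefault returns null for a missing reference-type key,
// so the TryGetValue/if dance becomes two lines.
var map = shows2Sponsors.GetValueOrDefault(showId);
return map == null ? null : sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();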
What are some little things you're enjoying? Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.

What's better than ILDasm? ILSpy and dnSpy are tools to Decompile .NET Code

.NET code (C#, VB, F#, etc) compiles (for the most part) into Intermediate Language (IL) and then makes its way to native code, usually by Just-in-Time (JIT) compilation on the target machine. When you get a DLL/Assembly, it's pre-chewed but not fully juiced, to mix my metaphors. Often you'll come across a DLL that you want to learn more about. Sometimes you'll want to just see the structure of classes, methods, etc, and other times you want to see the IL - or a close representation of the original C#/VB/F#, etc. You're not looking at the source, you're seeing a backwards projection of the IL as whatever language you want. You're basically taking this pre-chewed food back out of your mouth and getting a decent idea of what it was originally. (There's a tiny example of what that projection looks like at the end of this post.) I've used ILDasm for years, but it's old and lame and people tease you for using it because they are cruel. ;) Seriously, though, I use ILDasm - the IL Disassembler - simply because it's already installed. Those tweets got me thinking though that I need to update my options, so I'm trying out ILSpy and dnSpy.

ILSpy

ILSpy has been around for a while and has multiple front-ends, including ones for Linux/Mac/Windows based on Avalonia in the form of AvaloniaSpy. You can also integrate ILSpy into Visual Studio 2017 or 2019 with this extension. There is also a console decompiler and, interestingly, cross-platform PowerShell cmdlets. I've always liked the "Open List" feature of ILSpy where you can open a preconfigured list of assemblies you want to browse, like ASP.NET MVC, .NET 4, etc. A fun open source contribution for you might be to update the included lists with newer defaults. There are so many folks doing great work in open source out there, why not jump in and help them out?

dnSpy

dnSpy has a lovely UI AND a great console app using the same engine. It's amazingly polished and VERY complete. I was surprised that it also has a full hex editor as well as property pages for common EXE file headers. From their GitHub, dnSpy features:

- Debug .NET Framework, .NET Core and Unity game assemblies, no source code required
- Edit assemblies in C# or Visual Basic or IL, and edit all metadata
- Light and dark themes
- Extensible, write your own extension
- High DPI support (per-monitor DPI aware)

dnSpy takes it to the next level with an integrated debugger, meaning you can attach to a running process and debug it without source code - but it feels like source code because it's decompiling for you. Note where it says C#, I can choose C#, VB, or IL as a "view" on my decompiled code. Here is dnSpy actually debugging ILSpy and stopped at a decompiled breakpoint. There's a lot of great low-level stuff in this space. Another cool tool is Reflexil, a .NET Assembly Editor, as well as de4dot by the same mysterious author as dnSpy. Commercial tools include Reflector and JustDecompile.
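To make "backwards projection" concrete, here's a tiny example of mine (not from the post). Compile this class and point ILSpy or dnSpy at the resulting DLL:

public class Greeter
{
    // In IL this becomes get_Name()/set_Name() methods plus a
    // compiler-generated backing field; a decompiler projects it
    // back to the auto-property you see here.
    public string Name { get; set; }

    // The $"..." interpolation is lowered to a string.Format or
    // string.Concat call in IL, so the decompiled view may show
    // either that call or reconstructed interpolation syntax.
    public string Greet() => $"Hello, {Name}!";
}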
What's your favorite? Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.

Bringing the SpaceOrb game controller forward with an Arduino Bridge via The Orbotron 9001

Almost ten years ago I posted about the SpaceTec SpaceOrb 360 Controller, and that was 15 years after it came out. We are now 25 years into the legend of the SpaceOrb and I will continue to tell the tale. The SpaceOrb is one of a series of innovative "Spacemice" that offer more than just two degrees of input freedom. In fact, they offer SIX. "The puck or ball of a spacemouse can be moved along the X, Y and Z axes as well as being twisted rotationally on each of those axes (roll, pitch and yaw)." Vic Putz continues to carry a torch for the SpaceOrb, as do I, except he's actually doing something about it. A decade ago I bought an Arduino and an "OrbShield" from Vic that sat on top and provided a realtime bridge between the RS-232 serial port and the modern USB HID (Human Interface Device) standard used today. The goal is to move beyond unsigned device drivers and create a system-agnostic solution that presents an old device in a new, driver-free way. Vic has been working on a new version called the Orbotron 9000/9001 for the last few years and it's currently sold out at his little store. It acts as an interface for the SpaceOrb 360 and comes configured for that device, but should also work with the SpaceBall 5000, SpaceBall 4000FLX, and Magellan SpaceMouse. Code and plans are on GitHub, natch. When you plug the SpaceOrb into the Orbotron 9001 and then into your PC, it shows up as a game controller! There are several innovative "six degrees of freedom" games out there, like "Overload," the sequel to Descent, on Steam, as well as Retrovirus and NeonXSZ, plus open source reimplementations of Descent like DXX Rebirth (give them some love!) and Forsaken. Modern XInput games are trickier, but you may have success with https://www.x360ce.com by mapping the orb buttons and axes to a gamepad. I'm still exploring this space, but I love that The Internet - with the help of the enterprising and patient - refused to let the good parts of history die, by making innovative and clean bridges between the past and the future. Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.


Piper Command Center BETA - Build a game controller from scratch with Arduino

Back in 2018 I posted my annual Christmas List of STEM Toys, and the Piper Computer Kit 2 was on the list. My kids love this little wooden "laptop" built around a Raspberry Pi and an LCD screen. You spend time going through curated episodes of custom content, and you build and wire the computer LIVE while it's on! The Piper folks saw my post and asked me to take a look at the BETA of their Piper Command Center, so my sons and I jumped at the chance. They are actively looking for feedback. It's a chance to build our own game controller! The Piper Command Center BETA already has a ton of online content and things to try. Their "firmware" is an Arduino sketch and it's all up on GitHub. You'll want to get the Arduino IDE from the Windows Store. Today the Command Center can look like a keyboard or a mouse. In Mouse Mode (default), the joystick controls cursor movement and the left and right buttons mimic left and right mouse clicks. In Keyboard Mode, the joystick mimics the arrow keys on a keyboard, and the buttons mimic Space Bar (Up), Z (Left), X (Down), and C (Right) keys on a keyboard. Once it's built you can use the controller to play games in your browser, or soon, with new content on the Piper itself, which usually runs Minecraft. However, you DO NOT need the Piper to get the Piper Command Center. They are separate but complementary devices. Assemble a real working game controller, understand the basics of an Arduino, and discover physical computing by configuring a joystick, buttons, and more. Ideal for ages 13+. My son is looking at how he can modify the "firmware" on the Command Center to allow him to play emulators in the browser. The Piper Command Center comes unassembled, of course, and you get to put it together with a cool blueprint instruction sheet. We had some fun with the wiring and were off by one a few times, but they've got a troubleshooting video that helped us through it. It's a nice little bit of kit and I love that it's made of wood. I'd like to see one with a second joystick that could literally emulate an XInput control pad, although that might be more complex than just emulating a mouse or keyboard. Go check it out. We're happy with it and we're looking forward to whatever direction it goes. The original Piper has updated itself many times in the few years we've had it, and we upgraded it to a 16 gig SD card to support the latest content and OS update. The Piper Command Center is in BETA and will be updated and actively developed as they explore this space and what they can do with the device. As of the time of this writing there were five sketches for this controller. Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.


Visual Studio Code Remote Development may change everything

OK, that's a little clickbaity but it's surely impressed the heck out of me. You can read more about VS Code Remote Development (at the time of this writing, available in the VS Code Insiders builds), but here's a little on my first experience with it. The Remote Development extensions require Visual Studio Code Insiders.

Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment. It effectively splits VS Code in half and runs the client part on your machine and the "VS Code Server" basically anywhere else. The Remote Development extension pack includes three extensions. See the following articles to get started with each of them:

- Remote - SSH - Connect to any location by opening folders on a remote machine/VM using SSH.
- Remote - Containers - Work with a sandboxed toolchain or container-based application inside (or mounted into) a container.
- Remote - WSL - Get a Linux-powered development experience in the Windows Subsystem for Linux.

Lemme give a concrete example. Let's say I want to do some work in a language like Rust, except I don't have that language or its SDKs/tools on my machine. Aside: You might, at this point, have already decided that I'm overreacting and this post is nonsense. Here's the thing though when it comes to remote development. Hang in there.

On the Windows side, lots of folks create Windows VMs in someone's cloud and then RDP (Remote Desktop) into that machine and push pixels around, letting the VM do all the work while they remote the screen. On the Linux side, lots of folks create Linux VMs or containers, SSH into them with their favorite terminal, run vim and tmux or whatever, and push text around, letting the VM do all the work while they remote the text. In both these scenarios you're not really client/server, you're terminal/server or thin client/server. VS Code is a thick client with clean, clear interfaces to language services that have location transparency.

I type some code, maybe an object instance, then IntelliSense is invoked with a press of "." - who does that work? Where does that list come from? If you're running code locally AND in the container, then you need to make sure both sides are in sync, same SDKs, etc. It's challenging.

OK, I don't have the Rust language or toolkit on my machine. I'll clone this repository and then run Code, the Insiders version:

C:\github> git clone https://github.com/Microsoft/vscode-remote-try-rust
Cloning into 'vscode-remote-try-rust'...
Unpacking objects: 100% (38/38), done.
C:\github> cd .\vscode-remote-try-rust
C:\github\vscode-remote-try-rust [main =]> code-insiders .

Then VS Code says, hey, this is a Dev Container, want me to open it? There's a devcontainer.json file that has a list of extensions that the project needs (there's a sketch of one at the end of this post). And it will install those VS Code extensions inside a development Docker container and then access them remotely. This isn't a list of extensions that your LOCAL system needs - you don't want to sully your system with 100 extensions. You want to have just those extensions that you need for the project you're working on. Compartmentalization. You could do development and never install anything on your local machine - you're finding a sweet spot that doesn't involve pushing text or pixels around. Now look at this screenshot and absorb.
It's setting up a Dockerfile, sure, with the development tools you want to use, and then it runs docker exec and brings in the VS Code Server! Check out the Extensions section of VS Code, and check out the lower left corner. That green status bar shows that we're in a client/server situation. The extensions specific to Rust are installed in the Dev Container and we are using them from VS Code. When I'm typing and working on my code in this way (by the way, it took just minutes to get started) I've got a full experience with IntelliSense, debugging, etc. Here I am doing a live debug session of a Rust app with zero setup other than VS Code Insiders, the Remote extensions, and Docker (which I already had). As I mentioned, you can run within WSL, Containers, or over SSH. It's early days but it's extraordinarily clean. I'm really looking forward to seeing how far and effortless this style of development can go. There's so much less yak shaving! It effectively removes the whole setup part of your coding experience and you get right to it.
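For flavor, here's roughly what a devcontainer.json looks like. This is an illustrative sketch - the field names come from the Dev Container format of this era, but the values are made up, not copied from the try-rust repo:

{
    // Which Dockerfile to build for the dev container, and which
    // extensions to install INSIDE it (not on your local machine).
    "name": "Rust sample",
    "dockerFile": "Dockerfile",
    "extensions": [
        "rust-lang.rust"
    ]
}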
Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well. © 2018 Scott Hanselman. All rights reserved.

Using the Steam Link app to stream PC Games directly to your iPhone or mobile device

I think that we, as an industry, are still figuring game streaming out. It's challenging to find that sweet spot between quality and frames per second, all while respecting the speed of light and the laws of physics. That said, if you have a rock-solid 5GHz wireless network, or better yet, a solid wired network, you can do some pretty cool stuff today.

How to stream PC games from Windows 10 to your Xbox One for free

You can use the Xbox app on Windows 10 to stream from your Xbox One to your PC. I use this to play on my Xbox while I walk on my treadmill in my garage. Works great even on my comparatively underpowered Surface Pro 3. You can also do the opposite if you have a powerful PC. You can run the Xbox Wireless Display app and remote your PC to your Xbox. Here I am running Batman on my PC with an NVidia 1080, from my Xbox.

I also have a Steam Link - it's odd to me that they discontinued this great little device - that I use to stream from my PC to my big TV. However, if you have a Raspberry Pi 3 or 3B+ running Stretch, you can try a beta of Steam Link and effectively make your own little dedicated Steam Link device. Bonus points if you 3D print a replica case to make it look like a Steam Link.

sudo apt update
sudo apt install steamlink
steamlink

Today, however, Steam Link was released (after a rejection) to the Apple iOS App Store, so I had to try this out from my iPhone XS Max. I also have a Steam Controller, which, while weird (i.e. it's not an Xbox Controller), is the most configurable controller ever, and it can emulate a mouse pretty well when needed. They released a new firmware for the Steam Controller that enabled BLE support, which allows it to be used as an MFi controller on an iOS device. You do need to memorize or write down the incantations to switch between the original RF mode and BLE mode, though. Aside: MFi is almost criminally neglected and Apple has utterly dropped the ball and missed an opportunity to REALLY make iOS devices more than casual gaming devices. Only in the last few years have decent MFi controllers been released and game support is still embarrassingly spotty. I've used my now-discontinued SteelSeries Stratus a handful of times.

You install the app, pair your controller with your iOS device/phone/tablet, then test your network. I'm using an Amplifi Mesh Network so I can control how my devices connect to the network, manage band selection, and set Quality of Service (QoS), so I didn't have any trouble getting 55 Mb/s from my wired computer to my wireless iPhone. The quality is up and down as it appears they are focused on maintaining a high framerate. Here's a captured local video of me playing Batman from my high-end rig streaming to Steam Link on my iPhone.

"Here’s a better quality video with the iPhone at full power and connect to 5ghz using Steam Link pic.twitter.com/N2UZ0P2G4n" — Scott Hanselman (@shanselman) May 18, 2019

What has been YOUR experience with Game Streaming? Sponsor: Suffering from a lack of clarity around software bugs? Give your customers the experience they deserve and expect with error monitoring from Raygun.com. Installs in minutes, try it today! © 2018 Scott Hanselman. All rights reserved.