Oracle to start charging for Java

I've been following this change to Java for many months, since initially seeing the popup warning when updating Java on my local machine. I'm only now getting around to blogging about it, now that the options have become clearer, and I'm finding that some people still aren't aware of the change coming in January 2019, which will mean you need a commercial Java licence if you want any further security updates for the Oracle version of Java.

I brought this topic up at the Sitecore Discussion Club
http://sitecore.events/

 

Java Public Updates

 

When I started to follow this, the policy for the Java 11 LTS release hadn't been announced, and I thought I would need to update to Java 11 to continue to get free updates. Solr supports up to Java 10 (since 7.3.0), but Sitecore doesn't yet support a Solr version that high (https://kb.sitecore.net/articles/227897), and in any case Java 11 is no longer free, so that isn't a route to free updates either.


Oracle Java - what’s the cost?

The cost has come down since I started following this.

Originally the price was:

 

                   Per CPU   Support
Java SE Advanced   $5,000    $1,100
Java SE Suite      $15,000   $3,300

 

That is quite a large up-front cost, with an ongoing support cost on top.

Then a subscription licence was announced, charging $25 per processor per month including support, dropping to $12.50 with volume discounts (and possibly lower with an Enterprise agreement), which works out cheaper than the previous licensing.

A note on mapping an Oracle Processor Licence to an Azure vCPU:

https://www.oracle.com/assets/cloud-licensing-070579.pdf

“Microsoft Azure – count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyperthreading is not enabled.”

If you've got 2 Solr servers and 3 ZooKeeper servers, all running as quad cores, that's 5 x 4 = 20 processors, which might be up to 20 x $25 = $500 a month. If the hyperthreading rule above applies, it might be $250 a month, and cheaper still if you have enough servers for a volume discount. You'd need to go through Oracle's Java licensing to confirm the actual price for your particular scenario.
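As a rough back-of-the-envelope sketch of that calculation (the server counts, core counts and list price here are assumptions - confirm real pricing with Oracle licensing):

using System;

class JavaLicenceEstimate
{
    static void Main()
    {
        int servers = 2 + 3;             // 2 Solr + 3 ZooKeeper servers (assumed)
        int coresPerServer = 4;          // quad-core VMs (assumed)
        decimal pricePerProcessor = 25m; // assumed list price per processor per month

        int processorLicences = servers * coresPerServer;            // 20
        decimal monthlyCost = processorLicences * pricePerProcessor; // $500 per month

        // If two vCPUs count as one Oracle Processor licence (hyperthreading enabled):
        decimal withHyperthreading = monthlyCost / 2;                // $250 per month

        Console.WriteLine($"{monthlyCost:C0} full, {withHyperthreading:C0} with the hyperthreading rule, per month");
    }
}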

Open JDK

What are the alternatives to paying Oracle for commercial support?

Open JDK is the alternative.

https://blog.joda.org/2018/09/do-not-fall-into-oracles-java-11-trap.html

Is anyone using Open JDK?

If you look at the docker community, most Docker images which use Java use an Open JDK variant.

(The page was already saying: "DEPRECATED - This image is officially deprecated in favor of the Open JDK image, and will receive no further updates after 2016-12-31 (Dec 31, 2016). Please adjust your usage accordingly.")
https://hub.docker.com/_/java/

This post covers the earlier issues/questions around the legality of running Java on Docker:
https://blog.takipi.com/running-java-on-docker-youre-breaking-the-law/

This explains the issues, with an update on how support stands now:
https://devops.stackexchange.com/questions/433/is-there-no-oracle-jdk-for-docker

Flow chart of which OpenJDK to choose for Docker
https://medium.com/@hudsonmendes/docker-spring-boot-choosing-the-base-image-for-java-8-9-microservices-on-linux-and-windows-c459ec0c238

Update on Oracle Java support for Docker
https://blogs.oracle.com/developers/official-docker-image-for-oracle-java-and-the-openjdk-roadmap-for-containers

There are official Oracle Java Docker images, but it appears you have to sign up to see them:
https://hub.docker.com/_/oracle-serverjre-8

There are many versions of the Open JDK, but which one should you use?

This post lists out the different flavours of the Open JDK:
https://blog.joda.org/2018/09/time-to-look-beyond-oracles-jdk.html

AdoptOpenJDK appears to be the main choice, with longer-term updates, and it is actually free.
https://adoptopenjdk.net/

Red Hat are supporting it (as are IBM, who have now bought Red Hat):
https://developers.redhat.com/blog/2018/09/24/the-future-of-java-and-openjdk-updates-without-oracle-support/

Amazon are releasing their own build of the Open JDK, which they use internally. It is still in public preview, but it is certainly one to look out for - and not just for use on AWS, as it can be used off AWS too, which sounds awesome.

I noticed from this tweet that the version of Java that Azure App Service was running was Azul's:
https://twitter.com/dancruickshank/status/1072541500058284034

A quick google later, I found it was announced in September that Azure customers can use the Azul Open JDK build at no extra cost. So that's awesome too if you are an Azure customer, though it is only for use on Azure.

Summary

Ultimately you've got to weigh up the pros and cons for your particular scenario: pay a monthly fee to stay on official Oracle Java with security updates, or switch to the Open JDK and pick the variant which fits you best - and make sure your Java software is compatible with the Open JDK you've chosen.

It would seem that if you're looking to move to Docker and Kubernetes eventually, embracing the Open JDK is the standard anyway. And even if Oracle engineers aren't going to be supporting the Open JDK any more, you've got a choice of Red Hat/IBM, Amazon and Azul (free on Azure) to go for.



Why you have to use InProc session state on Sitecore Content Authoring

A note on why InProc session state and sticky sessions must be used for Sitecore Content Authoring. The official Sitecore documentation used to say that Content Authoring could use a shared session state provider, but after testing, and contact with Sitecore support, the documentation was updated to reflect the current reality.

https://kb.sitecore.net/articles/860809

Bad Practice to use Sticky Sessions

A quick intro on why Sticky Sessions are bad. https://docs.microsoft.com/en-us/azure/architecture/guide/design-principles/scale-out

Avoid instance stickiness. Stickiness, or session affinity, is when requests from the same client are always routed to the same server. Stickiness limits the application’s ability to scale out. For example, traffic from a high-volume user will not be distributed across instances. Causes of stickiness include storing session state in memory, and using machine-specific keys for encryption. Make sure that any instance can handle any request.

Depending on your routing mechanism, you may also find that the distribution of load across your servers is uneven, with one server being overworked and other servers underutilised.
For example, if you reboot a server, everyone on it loses their session and fails over to another server. When the rebooted server comes back up, new sessions get routed to the now underutilised server, but the earlier sessions stay stacked on the other machine.
Or, if sticky sessions are based on IP address and lots of people work in the same office, that whole office will be allocated to the same machine.

 

Sticky Session

 

Redis provider network stability

At the time, the StackExchange.Redis session state provider often had network stability issues, particularly against Azure Redis over TLS: https://github.com/StackExchange/StackExchange.Redis/issues/871
Version 2 has since been released, which would be worth testing to see how much it improves the issue.

Too much data being put into Session state

However, another issue which couldn't be solved was that too much data was being stored in session state:

  • 228534 - Jsnlog entries spam session storage
  • 228355 - Validation related objects spam session storage

The JsnLog entries can be disabled by setting enabled="false" on the <jsnlog> element in \App_Config\Include\Sitecore.JSNLog.config.

For validation, though, there is no easy workaround. You can disable the validation bar, so no new validation messages are loaded with each Content Editor refresh, but new objects will still be added if validation is triggered manually.

ASP.NET WebForms Legacy

And this is on top of the Content Editor still using WebForms, with ViewState and ControlState needing to be stored in a shared medium - which may as well also be Redis if you are using a Redis session state provider. (Although a database is another option if you have spare DTUs - but why not be consistent.)

Redis is designed for small objects

https://azure.microsoft.com/en-gb/blog/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/

Redis Server and StackExchange.Redis are optimized for many small requests rather than fewer large requests. Splitting your data into smaller chunks may improve things here

A large session state, ViewState or ControlState isn't going to help.

 

Redis Logo

 

Summary

Ultimately, right now, to have more than one Content Authoring server you have to use sticky sessions and InProc session state.

I'm looking forward to a future version of Sitecore allowing us to get away from sticky sessions on Content Authoring, either by reducing the amount of data that needs to go into session storage, or by getting away from ViewState and ControlState - perhaps with Sitecore Horizon, the new editor built in Angular.



Zero down time for Delivery & Authoring

A lot of people talk about zero downtime, but normally just in the context of Content Delivery: either with Azure PaaS websites and swapping slots, or using Azure Traffic Manager to switch over to a different load balancer/set of VMs.

However, one of my colleagues found this article on achieving zero* downtime for Content Authoring:
https://sitecorepocs.blogspot.com/2016/06/sitecore-zero-downtime-deployments.html

*Or as near to zero as possible, since the users' session state will expire, as Sitecore requires the InProc session state provider on Content Authoring.

 

Zero downtime

 

I brought up in the Sitecore Discussion Club Slack channel that this is an area I'd like to discuss further, as trying to schedule deployments out of hours isn't great for those doing the deployment, and is disruptive to the editors trying to enter content.
I was then put in touch with the author on Slack.

Also, at the last Sitecore Discussion Club (http://sitecore.events/), the author of the article was there and presented the idea to the group.

It was interesting to hear that this was created to avoid getting up early to deploy before the authors were in, and that it has been used in production, so the idea works.

The architecture in the original post was for two websites running on a single machine, using Lucene indexes and MSMQ.
There is no reason, though, that it can't be updated to use an Azure message queue and two complete, independent stacks (Content Delivery, Content Authoring, SolrCloud).

At the group we also discussed some of the limitations/ideas

  • Rollbacks: as it stands you would want to be confident before switching over, as there is no way to roll back without data loss, unless you add a similar queue going in reverse. Hopefully, with enough testing, rollbacks shouldn't be a frequent occurrence, so this might be acceptable.
  • A transformation layer: say a field had been renamed, a layer between deserialising the message and applying it to Sitecore would allow the message to be intercepted and updated to the new deployment's expected format.
  • Recording content editing activity like this could provide a way to record/replay content authoring activities, to simulate authoring load in a test environment.

It's an idea worth exploring further; more updates as we try this out ourselves.



Sitecore (Fast) Queries

There are quite a few blog posts on avoiding (fast) Sitecore queries on Content Delivery.

In summary: fast queries always go to the database, and even a normal Sitecore query can go from making a page slow to bringing your site to its knees if that page is hit frequently enough.

I've had experience of this after inheriting a solution which ran three Sitecore queries on each page load for calendar events (items within a folder). Normally the page would load within a few seconds: not great, but usable. However, when a search engine got onto the page and decided to crawl through the calendar navigation links, which are effectively infinite, the execution time grew and grew.

Eventually the page wouldn't load within a couple of minutes, the .NET thread pool limits were reached, and the site was effectively down.

The quick fix was to prevent these pages from being indexed, as well as indicate not to follow the calendar links forever.

The longer term fix was to switch to Content Search, using an index to load the content and speed the page up (as well as keeping the indicators not to follow the infinite links).

(Of course, using Sitecore query for a few sub-items, or getting an item by ID, is acceptable - but never fast query.)
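For illustration, here is a rough sketch of the Content Search approach described above, assuming the default sitecore_web_index and the stock SearchResultItem type (the index name, folder ID parameter and class name are assumptions for the example):

using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;
using Sitecore.Data;

public class CalendarEventSearch
{
    // Sketch: fetch the items under a calendar folder from the index,
    // instead of running a Sitecore (fast) query against the database on every page load.
    public SearchResultItem[] GetEvents(ID calendarFolderId)
    {
        var index = ContentSearchManager.GetIndex("sitecore_web_index"); // assumed index name
        using (var context = index.CreateSearchContext())
        {
            return context.GetQueryable<SearchResultItem>()
                .Where(x => x.Paths.Contains(calendarFolderId)) // _path holds all ancestor IDs
                .ToArray();
        }
    }
}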

So onto the topic of this post: the Link Database certainly has its place, with finding references/referrers of items on Content Authoring being its primary usage.
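As a reminder of that primary usage, the sort of lookup the Link Database is good at looks roughly like this (a sketch using the standard Sitecore.Globals.LinkDatabase API; the helper class is illustrative):

using System.Linq;
using Sitecore.Data.Items;
using Sitecore.Links;

public static class LinkDatabaseLookups
{
    // Sketch: find the items which reference a given item (e.g. to show
    // "used by" information to authors) via the link database.
    public static Item[] GetReferrers(Item item)
    {
        ItemLink[] links = Sitecore.Globals.LinkDatabase.GetReferrers(item);

        return links
            .Select(link => link.GetSourceItem())
            .Where(source => source != null)
            .ToArray();
    }
}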

There are some considerations before you start adding a dependency on the Links database in your code. (Hint: use Content Search where possible.)

The Links Database is really just a table, which resided in the Core database by default until Sitecore 9, when it was moved to the Web database ahead of removing the dependency on the Core database.

We switched back to using the Core database to store the links, for reasons I'll come to shortly.

Scaling Limitation?

If you work in a scaled environment, though, you may have more than one Web or Core database, yet only one link database (table) can be maintained. So in a multi-region scaled environment, you'll be travelling across regions to the database which holds the maintained link database.

It might be that database replication resolves this problem, with other regions querying a read-only copy of a replicated database which contains the Link Database (table).

However we are avoiding using the Link Database on content delivery so we don’t encounter any scalability issues.

Rebuild Speed

Rebuilding the Link Database occurs in serial (one database's links are rebuilt after another) and takes a long time - e.g. 8+ hours. I understand from Sitecore support that there are some experimental patches to make the rebuild faster/run it in parallel; we haven't got around to trying these yet, but it is possibly an avenue worth pursuing.
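For context, a full rebuild boils down to something like the sketch below: each database's links are rebuilt one after another against the single Links table, which is why it takes so long on large content trees (the database names and wrapper class are assumptions):

using Sitecore.Configuration;
using Sitecore.Data;

public static class LinkDatabaseRebuild
{
    // Sketch: rebuild the link database serially, one database at a time.
    public static void RebuildAll()
    {
        foreach (var name in new[] { "core", "master", "web" }) // assumed database names
        {
            Database database = Factory.GetDatabase(name);
            Sitecore.Globals.LinkDatabase.Rebuild(database);
        }
    }
}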

Deadlocks

I understand the situation used to be worse before Sitecore 7.1, where selects didn't use the NOLOCK hint, but we still regularly see deadlocks on this table in production, even on Sitecore 9.

SQL Server transaction deadlocks related to the link database could occur when multiple threads were creating and moving items concurrently, for example, when the threads created items that were stored in item buckets. This has been fixed by changing the GetReferrers(Item, ID) method in the SqlLinkDatabase class so that it uses WITH NOLOCK when reading from the Links table. (401393) 7.1 release notes

 

Example of Traffic representing a Deadlock

 

Bad Execution Plan

Sometimes, when SQL Server evaluated a query on the Link Database (table), depending on the values being queried it would decide to do a table scan to find an entry. This would get saved as the execution plan, and all subsequent queries would run slower/use more DTUs until the execution plan was discarded.

High DTU usage

We noticed that on every publish there was a DTU spike. We even saw some issues on the live site because of these bad execution plans combined with the high DTU spikes, with other pages trying to query the same database under load. So something had to be done, other than the immediate fix of increasing the DTU limits on the database server and throwing money at the problem.

The following setting cut down the DTU spikes we were seeing on every publish; as we don't use the Links Database on Content Delivery, it was a quick win for us.

Disables incremental updates to the link database during publishing operations as an experimental optimization to speed up publishing (25% on average).

Enabling Parallel Publishing and related optimizations

<setting name="LinkDatabase.UpdateDuringPublish" value="false"/>

Fixing the Bad Execution Plan

SQL Server was choosing to do a table scan because, unfortunately, there was no covering index for the query: one of the columns is of type ntext, which prevents it from being part of an index.

Sitecore confirmed that, according to Microsoft's documentation, ntext will be removed from a future version of SQL Server, so in the future this column type will change.

IMPORTANT! ntext, text, and image data types will be removed in a future version of SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.

So we are currently trying out the following change to the Links Database (table):

ALTER TABLE [Links] ALTER COLUMN TargetPath NVARCHAR(MAX) NOT NULL
GO

CREATE INDEX ndxLinks_SourceItemId_SourceDatabase_ALL ON [Links]
([SourceItemID], [SourceDatabase])
INCLUDE ([SourceLanguage], [SourceVersion], [SourceFieldID], [TargetDatabase], [TargetItemID], [TargetLanguage], [TargetVersion], [TargetPath] )
WITH (ONLINE=ON, SORT_IN_TEMPDB=ON)

So far it doesn't appear to be adding much overhead to writes, and it is having the desired effect of preventing the bad execution plan - queries are using the new index. We are also reviewing whether it has any beneficial impact on the deadlocks.

I'll update further once this is released to production and we have more feedback. Hopefully these changes will come through in a future update of the Sitecore product.

Summary

Be careful before taking a dependency on the Links Database: your future architecture might not support it if you are planning on moving to a scaled, active/active, multi-region setup.

If you are using the Links Database on Content Delivery, then you won't have the luxury of turning off this setting so that the Web links database isn't updated on publish. If you aren't using the Links Database there, then consider turning this feature off to save some DTUs.

If you are also seeing slow queries against the links database/high DTUs, maybe try out this SQL schema/index change and test it yourself, or wait for it to become part of the Sitecore product.

I’d recommend using Sitecore Content Search where possible.



Following an upgrade from Sitecore 8.2 update 6 to Sitecore 9.0 update 2, we spotted that index rebuilds of the default Core, Web and Master indexes were taking much longer than they were before.

We're talking about rebuilding these 3 indexes in parallel in under 50 minutes on Sitecore 8.2, versus over 6 hours on Sitecore 9 (sometimes 20+ hours), for ~14 million items in each of the Web and Master databases.

 

6+ hours for a rebuild. Ain't nobody got time for that

 

This was using the same SolrCloud infrastructure (which had been upgraded ahead of the Sitecore 9 upgrade), the same size of VM for the Sitecore indexing server, and the same index batch sizes and thread counts.

<setting name="ContentSearch.ParallelIndexing.MaxThreadLimit value="15" />
<setting name="ContentSearch.ParallelIndexing.BatchSize" value="1500" />

Looking at the logs, we could see they were flooded with messages like:

XXXX XX:XX:XX WARN More than one template field matches. Index Name : sitecore_master_index Field Name : XXXXXXXXX

The initial suggestion from Sitecore Support was to apply some patches to filter out the messages being written to the log files (bug #195567).
However, this felt more like treating the symptoms rather than the cause.

With performance still only slightly improved, I tried, using reflection and overrides, to patch the behaviour in SolrFieldNameTranslator so that these warnings didn't need to be written to the log files in the first place. Unfortunately the code has lots of private, non-virtual methods and implements an internal interface, which proved quite tricky to override without resorting to IL modification - so it was really something for Sitecore to fix.
But even after all this, it was still taking around 4+ hours to rebuild the index on a good day.

I observed that an individual rebuild of the Core index was quite fast on its own, ~5 minutes. But Sitecore Support confirmed that the algorithm used resource stealing to make the jobs finish at about the same time as each other (a slow job steals resources from a faster job).
They also confirmed that in Sitecore 8.2 update 6 all the indexes were taking a similar time when run in parallel.
Work Stealing in Task Scheduler
Blog on Work Stealing

Resources on the servers and DTU usage on the database were minimal, so nothing appeared to be maxed out.

So what was the issue - some locking, or had the job scheduling changed in Sitecore 9?

Well, to find the answer, some performance traces were required from a test environment where we could replicate the issue.
After enough performance traces had been taken, Sitecore support observed that there were lots of idle threads doing nothing - which was odd on a server with 16 cores and 15 threads allocated for indexing.
Sitecore support were then able to find the bug. It is specific to the OnPublishEndAsynchronousSingleInstanceStrategy strategy, which was being used by the web index: this strategy overrides the Run() method and initialises the LimitedConcurrencyLevelTaskSchedulerForIndexing singleton with an incorrect MaxThreadLimit value.

 

On Publish End Asynchronous Single Instance Strategy

 

This code appears to be the same in previous versions; most likely we were using onPublishEndAsync rather than onPublishEndAsyncSingleInstance before the upgrade.
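As a contrived illustration of that failure mode (this is not Sitecore's actual class, just a sketch of how a lazily created singleton ends up capped by whichever strategy initialises it first):

public sealed class LimitedConcurrencySchedulerSingleton
{
    private static LimitedConcurrencySchedulerSingleton _instance;
    private static readonly object _lock = new object();

    public int MaxThreadLimit { get; }

    private LimitedConcurrencySchedulerSingleton(int maxThreadLimit)
    {
        MaxThreadLimit = maxThreadLimit;
    }

    public static LimitedConcurrencySchedulerSingleton GetInstance(int maxThreadLimit)
    {
        // The first caller decides the limit; later callers (and their configured
        // MaxThreadLimit values) are ignored, so every index sharing this singleton
        // is capped at whatever the first strategy passed in.
        if (_instance == null)
        {
            lock (_lock)
            {
                if (_instance == null)
                    _instance = new LimitedConcurrencySchedulerSingleton(maxThreadLimit);
            }
        }

        return _instance;
    }
}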

Ask for bug fix #285903 from Sitecore support if you are affected by this, so your config settings don’t get overwritten.



Recently I was looking at building a Sitecore search domain index (see Domain vs God index), which had quite a few calculated fields. Lots of the calculated fields were based on similar information about the parent nodes of the current item, and for each calculated field I was performing the same lookups again and again, per field, per item.

I thought there had to be a way to improve this, and found a forum post from 2015 where someone asked the same question, with a response from someone who had solved it on one of the projects they were working on: Indexing multiple fields at same time

In the answer given, it seems quite easy to override this method in the DocumentBuilder.
However, using dotPeek it appears that somewhere between 2015 and Sitecore 8.2 update 6 the method containing the logic I want to override was made private - quite possibly when parallel indexing was introduced, as the code now forks in two, and both branches reference a private method which contains the logic I want to overwrite.
And that private method calls another private method. :/

 

Private methods

 

Until Sitecore adds (or adds back) the extensibility points I need,
it's reflection to the rescue.

 

Reflect all the things

 

But reflection is slow, so let’s improve that performance by using Delegates (Improving Reflection Performance with Delegates)

Edit - it appears the SolrDocumentBuilder isn't a singleton, so I moved the reflection from the constructor into a static constructor.


using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Reflection;
using System.Threading.Tasks;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;
using Sitecore.ContentSearch.Diagnostics;
using Sitecore.ContentSearch.SolrProvider;
using Sitecore.Diagnostics;

public class SolrDocumentBuilderCustom : SolrDocumentBuilder
{
    private delegate void AddFieldDelegate(SolrDocumentBuilder documentBuilder, string fieldName, object fieldValue, string returnType);

    private static readonly AddFieldDelegate _addFieldDelegate;

    static SolrDocumentBuilderCustom()
    {
        var solrDocumentBuilderType = typeof(SolrDocumentBuilder);
        var addFieldMethod = solrDocumentBuilderType.GetMethod("AddField",
            BindingFlags.Instance | BindingFlags.NonPublic,
            null,
            new[]
            {
                typeof(string),
                typeof(object),
                typeof(string)
            },
            null);

        _addFieldDelegate = (AddFieldDelegate)Delegate.CreateDelegate(typeof(AddFieldDelegate), addFieldMethod);
    }

    public SolrDocumentBuilderCustom(IIndexable indexable, IProviderUpdateContext context) : base(indexable, context)
    {
    }

    public override void AddComputedIndexFields()
    {
        if (this.IsParallelComputedFieldsProcessing)
            this.AddComputedIndexFieldsInParallel();
        else
            this.AddComputedIndexFieldsInSequence();
    }

    protected override void AddComputedIndexFieldsInParallel()
    {
        ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
        this.ParallelForeachProxy.ForEach<IComputedIndexField>(
            (IEnumerable<IComputedIndexField>)this.Options.ComputedIndexFields,
            this.ParallelOptions,
            (Action<IComputedIndexField, ParallelLoopState>)((field, parallelLoopState) => this.AddComputedIndexField(field, parallelLoopState, exceptions)));
        if (!exceptions.IsEmpty)
            throw new AggregateException((IEnumerable<Exception>)exceptions);
    }

    protected override void AddComputedIndexFieldsInSequence()
    {
        foreach (IComputedIndexField computedIndexField in this.Options.ComputedIndexFields)
            this.AddComputedIndexField(computedIndexField, (ParallelLoopState)null, (ConcurrentQueue<Exception>)null);
    }

    private new void AddComputedIndexField(IComputedIndexField computedIndexField, ParallelLoopState parallelLoopState = null, ConcurrentQueue<Exception> exceptions = null)
    {
        Assert.ArgumentNotNull((object)computedIndexField, nameof(computedIndexField));
        object fieldValue;
        try
        {
            fieldValue = computedIndexField.ComputeFieldValue(this.Indexable);
        }
        catch (Exception ex)
        {
            CrawlingLog.Log.Warn(string.Format("Could not compute value for ComputedIndexField: {0} for indexable: {1}", (object)computedIndexField.FieldName, (object)this.Indexable.UniqueId), ex);
            if (!this.Settings.StopOnCrawlFieldError())
                return;
            if (parallelLoopState != null)
            {
                parallelLoopState.Stop();
                exceptions.Enqueue(ex);
                return;
            }
            throw;
        }

        if (fieldValue is List<Tuple<string, object, string>>)
        {
            var fieldValues = fieldValue as List<Tuple<string, object, string>>;
            if (fieldValues.Count <= 0)
            {
                return;
            }

            foreach (var field in fieldValues)
            {
                if (!string.IsNullOrEmpty(field.Item3) && !Index.Schema.AllFieldNames.Contains(field.Item1))
                {
                    _addFieldDelegate(this, field.Item1, field.Item2, field.Item3);
                }
                else
                {
                    AddField(field.Item1, field.Item2, true);
                }
            }
        }
        else
        {
            if (!string.IsNullOrEmpty(computedIndexField.ReturnType) && !this.Index.Schema.AllFieldNames.Contains(computedIndexField.FieldName))
            {
                _addFieldDelegate(this, computedIndexField.FieldName, fieldValue, computedIndexField.ReturnType);
            }
            else
            {
                this.AddField(computedIndexField.FieldName, fieldValue, true);
            }
        }
    }
}

You can then (as per the forum post referenced) return a List of Tuples from your computed index field, and they all get added to the index in one go, without having to re-process the shared logic for each field (assuming you have any).

var result = new List<Tuple<string, object, string>>
{
    new Tuple<string, object, string>("solrfield1", value1, "stringCollection"),
    new Tuple<string, object, string>("solrfield2", value2, "stringCollection"),
    new Tuple<string, object, string>("solrfield3", value3, "stringCollection")
};
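To make that concrete, here is a hedged sketch of what such a computed field might look like - the class name, field names, return types and the parent lookup are purely illustrative assumptions:

using System;
using System.Collections.Generic;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;
using Sitecore.Data.Items;

public class ParentInfoFields : IComputedIndexField
{
    public string FieldName { get; set; }
    public string ReturnType { get; set; }

    public object ComputeFieldValue(IIndexable indexable)
    {
        var item = (Item)(indexable as SitecoreIndexableItem);
        if (item == null || item.Parent == null)
            return null;

        // One shared lookup, reused for every field emitted below.
        var parent = item.Parent;

        return new List<Tuple<string, object, string>>
        {
            new Tuple<string, object, string>("parentname", parent.Name, "string"),
            new Tuple<string, object, string>("parenttemplate", parent.TemplateName, "string")
        };
    }
}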

End Result

For my particular case, with 10+ calculated fields which could be combined,
I got the index rebuild time down from 1 hour 8 minutes to 22 minutes on my local dev machine.

I then went on to improve index rebuild times further, by restricting which part of the tree the domain index crawls.

It seems I'm not the only one whose indexes can benefit from this; hopefully Sitecore will either add support for this, or make it easier to extend again in the future without nasty reflection.

Happy Sitecoring!



Fun with Logic Apps, Azure Functions and Twitter

While studying for the Microsoft 70-532 exam, I wanted to take a look at Azure Functions and Logic Apps.

Having gone through this example “Create a function that integrates with Azure Logic Apps”

It left me with some questions on how to improve it. E.g. I don’t want to receive an email per tweet.

So after some searching, I came across a new feature called batching, "Send, receive, and batch process messages in logic apps", but even after the batch size had been reached, each message in the batch would still result in an individual email.

Logic apps Compose

Then I came across this blog “Azure Logic Apps – Aggregate a value from an array of messages”

And the Compose feature was what I wanted: composing first the message I want out of each tweet, then combining those messages together into the format I want to email.

Logic apps Compose

I also wanted to make some improvements: excluding retweets, and filtering tweets to the right language. "How to Exclude retweets and replies in a search api", "How to master twitter search"

Search Tweets

And here is the final result: a Twitter search of original tweets, filtered by language, combined into a single email.

Combined Email



Updating from VS2015 CTP6 to RC

A number of changes have been made to names.

Watch this for further details:

Video: ASP.NET 5 Community Standup - Mar 10th, 2015 - The Big Rename

Key slides:

Renamed tools

Renamed folder and packages

If you install Visual Studio, DNVM and DNX will be set up for you.
To install Visual Studio RC, first uninstall Visual Studio CTP 6.
If you aren't installing Visual Studio and want to use the command line to install the .NET Version Manager (DNVM), run the following command (you'll need PowerShell v3 for this):

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"

Then, to install the .NET Execution Environment (DNX), run the following command:

dnvm upgrade

I got a warning to remove the old KRE_HOME environment variable:

WARNING: Found a KRE_HOME environment variable. This variable has been deprecated and should be removed, or it may interfere with DNVM and the .NET Execution environment

To see what is installed, and which version is the default, run:

dnvm list

To set the CoreCLR runtime to be used, run:

dnvm use 1.0.0-beta4 -x64 -r coreclr

Then, to run the web server:

dnx . web

When running "dnx --watch . web" from the command line, the server will stop when any code change is made, but it won't restart.
To get the server to restart after a code change, something like the following is needed.
When using Visual Studio this is all handled for you.

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "for(;;) { Write-Output \"Starting...\"; dnx --watch . web }"


Goals

  • To set up Visual Studio 2015 to run an ASP.NET vNext website
  • Rather than using a pre-configured Azure VM, I wanted to set up Visual Studio on my own hardware.

How to do it/What I learnt

Reading the Release Notes

NOTE: Visual Studio 2015 CTPs are for testing and feedback purposes only. This release is unsupported and are not intended for use on production computers, or to create production code. We strongly recommend only installing this release in a virtual machine, or on a computer that is available for reformatting.

So a VM should be used. I downloaded Microsoft Virtual PC, only to remember when the VM tried to boot up that it doesn't work on 64-bit PCs. So, starting again, I downloaded VirtualBox.

I then created a Windows 7 VM in VirtualBox.
An important step: create it with enough space, e.g. 50GB+. It's surprising how much space is needed for Windows and patching, as well as for Visual Studio 2015.
In fact, to start with I allocated too little space and had to change the amount of space allocated to the VM, as I ran out of space while patching it.

Here is a good article if you need to change the disk space on a VirtualBox VM and update Windows to use the extra space.
The first steps about cloning are optional.
The important step, after having turned the VM off, is to issue the following command to change the hard drive size:

VBoxManage modifyhd "VMName.vdi" --resize 50000

Then, after booting it back up again, go into Control Panel > Administrative Tools > Computer Management > Disk Management, select the active partition you want to expand, right click and select "Extend Volume…", and allocate the extra space.

Now that you've got a Windows 7 VM, a few important things:

  • Patch it to include SP1
  • Patch it to include IE10/IE11
  • Just patch it up to the latest version & reboot

If you don't patch it to SP1, you won't be able to install Visual Studio.
And if you don't patch it to IE10/IE11 - you get a warning for this, which I ignored to begin with - I found Visual Studio wouldn't load the K runtime or allow you to debug or browse an ASP.NET vNext website. I ended up on this forum thread.
In summary: just patch your VM to the latest version.

Having got a patched VM with enough disk space, install Visual Studio 2015 CTP 6. I used the Visual Studio 2015 Ultimate Web Installer released on 23rd February 2015, and just installed the default options.

You'll also want to install the K runtime.
First, install the K Version Manager (KVM) from PowerShell as an administrator:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

As detailed on the ASP.NET GitHub page.

Now KVM is installed, you’ll want to get the K runtime.

kvm upgrade

Having done that, you can see what is installed by calling:

kvm list

And you should see something like:
KVM list

Notice it used the \.k\ folder; earlier versions were using \.kre\. You may read older documentation that refers to \.kre\.

You may want to switch to the CoreCLR runtime as the active version:

kvm use 1.0.0-beta3 -x64 -r coreclr

followed by

kvm list

And you should see something like this - notice the Active * has now moved:
KVM list

Now you've got a VM with the K runtime set up.

Open up visual studio and create a new project.
Visual Studio Create Project

Select a blank project
Select blank project

Now that's created, let's set up the welcome page for ASP.NET vNext. Open Startup.cs and add the following line:

app.UseWelcomePage();
Modify startup.cs
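For context, a minimal Startup.cs at the time would look roughly like this - namespaces and interface names moved around between the CTPs and betas, so treat it as a sketch rather than exact beta3 code:

using Microsoft.AspNet.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Serves the ASP.NET vNext welcome page for every request.
        app.UseWelcomePage();
    }
}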

This won’t compile at the moment as you don’t have a reference to the extension method.

To do that, open project.json and add a reference to "Microsoft.AspNet.Diagnostics": "1.0.0-beta3". I've also added references to "Microsoft.AspNet.Hosting" and "Microsoft.AspNet.Server.WebListener", and added the command "web", so the site can be launched from the command prompt.

Modify project.json

Make sure the version of the K runtime in project.json matches what you have installed, e.g. "1.0.0-beta3". When you save the file, Visual Studio should automatically restore the packages from NuGet.

In Visual Studio you can configure which version of K the application uses:

VS configure KRE version

You should now be able to launch the site from either Visual Studio or the command prompt.

To start the site via the command prompt, navigate to the website directory and run:

k web

Now you can navigate to the site as defined in the project.json configuration

Startup Page