Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Visual Studio Extensibility’ Category

Locate Project Item in Solution Explorer – Improved

Posted by Dan Vanderboom on August 24, 2010

Way back in March 2008, I wrote an article sharing a macro to locate the current document in Solution Explorer without having to leave the annoying setting “Track current item in Solution Explorer” turned on all the time.  I map it to my F1 key (I get my help online) and all is good in the world.

When I wrote this macro, I remember reading documentation for VS2003 or VS2005 saying that it wasn’t possible to update the “Track current item” setting programmatically.  This has changed: in VS2008 and VS2010 it can indeed be toggled programmatically.  Rob posted the much-abbreviated code in one of the comments.

Public Sub TrackProjectItem()
    ' Toggling the setting on and then off makes Solution Explorer locate the
    ' active document once, without leaving tracking permanently enabled.
    DTE.ExecuteCommand("View.TrackActivityinSolutionExplorer", True)
    DTE.ExecuteCommand("View.TrackActivityinSolutionExplorer", False)
    ' Give Solution Explorer focus so the selected item is visible.
    DTE.ExecuteCommand("View.SolutionExplorer")
End Sub

Thanks to Rob and all the others who left comments on that article.

Posted in Visual Studio, Visual Studio Extensibility | 1 Comment »

Filtering with Metadata in the Managed Extensibility Framework

Posted by Dan Vanderboom on September 19, 2009

The Managed Extensibility Framework (MEF) is the new extensibility framework from Microsoft.  Pioneered by Glenn Block in the patterns & practices group, and leveraged by the behemoth Visual Studio 2010, it has a striking resemblance to my own Inversion of Control (IoC) and Dependency Injection (DI) framework—which led me to have a couple of great conversations about IoC with Glenn at Tech Ed 2008 and then again at PDC 2008.

But MEF isn’t really written to be your IoC.  Instead, the IoC engine and DI aspects are implementation details, allowing you to do really no more than “MEF things together”.  The core concept of MEF is to provide very simple and powerful application composability.  Not in the user interface composition sense—for that, see Prism for WPF and Silverlight (explained in MSDN Magazine, September 2008)—but for virtually all other dynamic component assembly needs, MEF is your best friend.

The two things I like most about MEF are its simplicity and its lack of presumption about how it will be used.  Compose collections of strings, single-method delegates, or implementations of complex services.  All you’re doing is importing and exporting things, with little code required to wire things up.

MEF is currently in its seventh preview release, so expect beta-like quality.  My own experience with it has been very positive, but there are a number of shortcomings in the API.  This article is about a few of them and what can be done to add some much-needed functionality.

System.AddIn vs. MEF

There’s been some confusion around Microsoft’s coopetition among products with similar aims, and extensibility and composition are no exception.  The AddIn API (team blog) serves a similar purpose to MEF.  (See this two-part MSDN article on System.AddIn: first and second.)  The primary differentiator, from my understanding, is that the AddIn API is a bit more robust and a lot more complicated, and supports such things as isolating extensions in separate AppDomains.

With Visual Studio siding with MEF, it’s personally hard for me to imagine using the AddIn API.  If MEF is flexible and robust enough for Visual Studio, is it really likely to fall short for my own much smaller software systems?  Krzysztof Cwalina suggests they are complementary approaches, but I find that hard to swallow.  Why would I want to use two different extensibility frameworks instead of one coherent API?  If anything, I imagine that the lessons learned from the AddIn API will eventually migrate to MEF.

Daniel Moth notes that with the AddIn API, “there are many design decisions to make and quite a few subtleties in implementing those decisions in particular when it comes to discovering addins, version resiliency, isolation from the host etc.”  A customer of mine who used the AddIn API relied on a Visual Studio plug-in to manage its pipelines, and things were a real mess.  There were a bunch of assemblies, a lot of generated code, and not much clarity or confidence that it was all really necessary.

MEF: Import & ImportMany

In MEF, the Import attribute allows you to inject a value that is exported somewhere else using the Export attribute—typically from another assembly.  There is also an ImportMany attribute which is useful when you expect several exports that use the same contract.  By defining an IEnumerable<T> field or property and decorating it with the ImportMany attribute, all matching exports will be added to an enumerable type.

[ImportMany]
public IEnumerable<IVehicle> Vehicles;

What if you want to filter the exported vehicle types by some kind of metadata, though?  Let’s take a look at the IVehicle contract and some concrete classes that implement the contract.

public interface IVehicle { }

[Export(typeof(IVehicle))]
[ExportMetadata("Speed", "Slow")]
public class ToyotaPrius : IVehicle
{
    public ToyotaPrius() { }
}

[Export(typeof(IVehicle))]
[ExportMetadata("Speed", "Fast")]
public class LamborghiniDiablo : IVehicle
{
    public LamborghiniDiablo() { }
}

The object model isn’t very interesting, but that’s not the point.  What is interesting is that MEF allows us to supply metadata corresponding to our exports.  In this case, my contrived example has defined a metadata variable of “Speed”, with two possible values: “Fast” and “Slow”.  The variable name must be a string, but its value can be anything that’s supported within an attribute: string literals and constants, type objects, and the like.
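
As a quick illustration of a non-string metadata value (a hypothetical addition, not part of the scenario above), a type object works just as well:

public class AllWheelDrive { }    // invented marker type, just for illustration

[Export(typeof(IVehicle))]
[ExportMetadata("Speed", "Fast")]
[ExportMetadata("Drivetrain", typeof(AllWheelDrive))]
public class AudiR8 : IVehicle
{
    public AudiR8() { }
}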

Filtering Imports on Metadata

What if you want to ImportMany for all exports that have a particular metadata value?  Unfortunately, there are no such options in the ImportMany attribute class.

In my scenario, I’ve defined a static factory class called TrafficFactory, which at some imaginary point in the future will be responsible for building a city full of traffic.

public static class TrafficFactory
{
    // type initialization fails without a static constructor
    static TrafficFactory() { }

    public static IEnumerable<IVehicle> SlowVehicles =
        App.Container.GetExportedValues<IVehicle>(metadata => metadata.ContainsKeyWithValue("Speed", "Slow"));

    public static IEnumerable<IVehicle> FastVehicles =
        App.Container.GetExportedValues<IVehicle>(metadata => metadata.ContainsKeyWithValue("Speed", "Fast"));

    public static IDictionary<object, IVehicle> AllVehicles =
        App.Container.GetKeyedExportedValues<IVehicle>("Speed");
}

This is what I want to do, but there is no overload of GetExportedValues that accepts a metadata-dependent predicate function.  Adding one is easy, though.  While we’re at it, we’ll also add the ContainsKeyWithValue extension method, which I’ve borrowed from a Code Junky article on MEF container filtering.

public static class IDictionaryExtensions
{
    public static bool ContainsKeyWithValue<KeyType, ValueType>(
        this IDictionary<KeyType, ValueType> Dictionary,
        KeyType Key, ValueType Value)
    {
        return (Dictionary.ContainsKey(Key) && Dictionary[Key].Equals(Value));
    }
}

public static class MEFExtensions
{
    public static IEnumerable<T> GetExportedValues<T>(this CompositionContainer Container,
        Func<IDictionary<string, object>, bool> Predicate)
    {
        var result = new List<T>();

        // Walk every export of every part in the catalog, keeping exports that match
        // T's contract and whose metadata satisfies the predicate.
        foreach (var PartDef in Container.Catalog.Parts)
        {
            foreach (var ExportDef in PartDef.ExportDefinitions)
            {
                if (ExportDef.ContractName == typeof(T).FullName)
                {
                    if (Predicate(ExportDef.Metadata))
                        result.Add((T)PartDef.CreatePart().GetExportedValue(ExportDef));
                }
            }
        }

        return result;
    }
}

Now we can test this logic by wiring up MEF and then accessing the two filtered collections of cars, which will each contain a single IVehicle instance.

class App
{
    // Exported so dynamically loaded modules can import the container itself;
    // static so TrafficFactory (and Main) can reach it.
    [Export]
    public static CompositionContainer Container;

    static void Main(string[] args)
    {
        AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        Container = new CompositionContainer(catalog);
        Container.ComposeParts();

        var FastCars = TrafficFactory.FastVehicles;
        var SlowCars = TrafficFactory.SlowVehicles;
    }
}

Voilà!  We have metadata-based filtering.

You’ll also notice that I added an Export attribute to the Container itself.  By doing this, you can Import the container into any module that gets dynamically loaded.  It’s not used in this article, but getting to the container from a module is otherwise impossible without some kind of work-around.  (Thanks for pointing out the problem, Damon.)
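
For example, a part inside a dynamically loaded module might then grab the container like this (a minimal sketch; ReportingModule is an invented name):

public class ReportingModule
{
    // Imports the container that App exported above, so this module can compose further parts.
    [Import]
    public CompositionContainer Container;
}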

Using Metadata to Assign Dictionary Keys

Let’s take this one step further.  Let’s say you want to import many instances of MEF exported values into a Dictionary, using one of the metadata properties as the key.  This is how I’d like it to work:

public static IDictionary<object, IVehicle> AllVehicles =
    App.Container.GetKeyedExportedValues<IVehicle>("Speed");

Again, the current MEF Preview doesn’t support this, but another extension method is all we need.  We’ll add two, so that one version gives us all exported values and the other allows us to filter that selection based on other metadata.

public static IDictionary<object, T> GetKeyedExportedValues<T>(this CompositionContainer Container,
    string MetadataKey, Func<IDictionary<string, object>, bool> Predicate)
{
    var result = new Dictionary<object, T>();

    foreach (var PartDef in Container.Catalog.Parts)
    {
        foreach (var ExportDef in PartDef.ExportDefinitions)
        {
            if (ExportDef.ContractName == typeof(T).FullName)
            {
                // The value of the specified metadata key becomes the dictionary key.
                if (Predicate(ExportDef.Metadata))
                    result.Add(ExportDef.Metadata[MetadataKey], 
                        (T)PartDef.CreatePart().GetExportedValue(ExportDef));
            }
        }
    }

    return result;
}

public static IDictionary<object, T> GetKeyedExportedValues<T>(this CompositionContainer Container,
    string MetadataKey)
{
    return GetKeyedExportedValues<T>(Container, MetadataKey, metadata => true);
}

Add an assignment to TrafficFactory.AllVehicles in the App.Main method and see for yourself that it works.
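
Something like this, assuming composition succeeded (the dictionary keys come straight from the “Speed” metadata values):

var AllCars = TrafficFactory.AllVehicles;
// AllCars["Slow"] should be the ToyotaPrius instance, and AllCars["Fast"] the LamborghiniDiablo.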

If you’re using metadata values as Dictionary keys, it’s probably important for you not to mess them up.  I recommend using enum values both for metadata property names and for the values themselves when it’s possible to enumerate them, and const strings otherwise.
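
A minimal sketch of that convention, using invented names:

public static class VehicleMetadata
{
    public const string Speed = "Speed";    // const for the metadata property name
}

public enum Speed { Slow, Fast }

[Export(typeof(IVehicle))]
[ExportMetadata(VehicleMetadata.Speed, Speed.Fast)]    // enum value instead of a bare string
public class BugattiVeyron : IVehicle
{
    public BugattiVeyron() { }
}

Note that if you switch the values to enums like this, any filtering predicates and dictionary keys must compare against Speed.Fast rather than the string “Fast”.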

Now go forth and start using MEF!

Posted in Algorithms, Component Based Engineering, Composability, Design Patterns, Visual Studio Extensibility | 4 Comments »

Better Tool Support for .NET

Posted by Dan Vanderboom on September 7, 2009

Productivity Enhancing Tools

Visual Studio has come a long way since its debut in 2002.  With the imminent release of 2010, we’ll see a desperately-needed overhauling of the archaic COM extensibility mechanisms (to support the Managed Package Framework, as well as MEF, the Managed Extensibility Framework) and a redesign of the user interface in WPF that I’ve been pushing for and predicted as inevitable quite some time ago.

For many alpha geeks, the Visual Studio environment has been extended with excellent third-party, productivity-enhancing tools such as CodeRush and Resharper.  I personally feel that the Visual Studio IDE team has been slacking in this area, providing only very weak support for refactorings, code navigation, and Intellisense improvements.  While I understand their desire to avoid stepping on partners’ toes, this is one area I think makes sense for them to be deeply invested in.  In fact, I think a new charter for a Developer Productivity Team is warranted (or an expansion of that team if it already exists).

It’s unfortunately a minority of .NET developers who know about and use these third-party tools, and the .NET community as a whole would without a doubt be significantly more productive if these tools were installed in the IDE from day one.  It would also help to overcome resistance from development departments in larger organizations that are wary of third-party plug-ins, due perhaps to the unstable nature of many of them.  Microsoft should consider purchasing one or both of them, or paying a licensing fee to include them in every copy of Visual Studio.  Doing so, in my opinion, would make them heroes in the eyes of the overwhelming majority of .NET developers around the world.

It’s not that I mind paying a few hundred dollars for these tools.  Far from it!  The tools pay for themselves very quickly in time saved.  The point is to make them ubiquitous: to make high-productivity coding a standard of .NET development instead of a nice add-on that is only sometimes accepted.

Consider just from the perspective of watching speakers at conferences coding up samples.  How many of them don’t use such a tool in their demonstration simply because they don’t want to confuse their audience with an unfamiliar development interface?  How many more demonstrations could they be completing in the limited time they have available if they felt more comfortable using these tools in front of the masses?  You know you pay good money to attend these conferences.  Wouldn’t you like to cover significantly more ground while you’re there?  This is only likely to happen when the tool’s delivery vehicle is Visual Studio itself.  Damon Payne makes a similar case for the inclusion of the Managed Extensibility Framework in .NET Framework 4.0: build it into the core and people will accept it.

The Gorillas in the Room

CodeRush and Resharper have both received recent mention in the Hanselminutes podcast (episode 196 with Mark Miller) and in the Deep Fried Bytes podcast (episode 35 with Corey Haines).  If you haven’t heard of CodeRush, I recommend watching these videos on their use.

For secondary information on CodeRush, DXCore, and the principles with which they were designed, I also recommend several episodes of DotNetRocks.

I don’t mean to be so biased toward CodeRush, but it’s the tool I’m personally familiar with; it has a broader range of functionality, and it seems to get the majority of the press coverage.  However, those who do talk about Resharper speak highly of it, so I recommend you try both to see which one works best for you.  But above all: go check them out!

Refactor – Rename

Refactoring code is something we should all be doing constantly to avoid the accumulation of technical debt as software projects and the requirements on which they are based evolve.  There are many refactorings in Visual Studio for C#, and many more in third-party tools for several languages, but I’m going to focus here on what I consider to be the most important refactoring of them all: Rename.

Why is Rename so important?  Because it’s so commonly used, and it has such far-reaching effects.  It is frequently the case that we give poor names to identifiers before we clearly understand their role in the “finished” system, and even more frequent that an item’s role changes as the software evolves.  Failure to rename items to accurately reflect their current purpose is a recipe for code rot and greater code maintenance costs, developer confusion, and therefore buggy logic (with its associated support costs).

When I rename an identifier with a refactoring tool, all of the references to that identifier are also updated.  There might be hundreds of references.  In the days before refactoring tools, one would accomplish this with Find-and-Replace, but this is dangerous.  Even with options like “match case” and “match whole word”, it’s easy to rename the wrong identifiers, rename pieces of string literals, and so on; and if you forget to set these options, it’s worse.  You can go through each change individually, but that can take a very long time with hundreds of potential updates and is a far cry from a truly intelligent update.

Ultimately, the intelligence of the Rename refactoring provides safety and confidence for making far-reaching changes, encouraging more aggressive refactoring practices on a more regular basis.

Abolishing Magic Strings

I am intensely passionate about any tool or coding practice that encourages refactoring and better code hygiene.  One example of such a coding practice is the use of lambda expressions to select identifiers instead of using evil “magic strings”.  In my article on dynamically sorting Linq queries, the use of “magic strings” forced me to write something like this:

Customers = Customers.Order("LastName").Order("FirstName", SortDirection.Descending);

The problem here is that “LastName” and “FirstName” are oblivious to the Rename refactoring.  Using the refactoring tool might give me a false sense of security in thinking that all of my references to those two fields have been renamed, leading me to The Pit of Despair.  Instead, I can define a function and use it like the following:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source, 
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    // Extract the member name from the lambda body and delegate to the string-based overload.
    // Note: a value-type member gets wrapped in a Convert node here, so a more robust version
    // would unwrap the UnaryExpression before casting to MemberExpression.
    return Order(Source, (Selector.Body as MemberExpression).Member.Name, SortDirection);
}

Customers = Customers.Order(c => c.LastName).Order(c => c.FirstName, SortDirection.Descending);

This requires a little understanding of the structure of expressions to implement, but the benefit is huge: I can now use the refactoring tool with much greater confidence that I’m not introducing subtle reference bugs into my code.  For such a simple example, the benefit is dubious, but multiply this by hundreds or thousands of magic string references, and the effort involved in refactoring quickly becomes overwhelming.
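
For reference, the lambda overload above delegates to a string-based Order method from my earlier article.  Here’s a minimal reflection-based sketch of what such an overload might look like (the enum definition and implementation are my assumptions here, not the original code):

public enum SortDirection { Ascending, Descending }

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source,
    string PropertyName, SortDirection SortDirection)
{
    // Look the property up once by name, then read it per element while sorting.
    var Property = typeof(T).GetProperty(PropertyName);
    Func<T, object> KeySelector = item => Property.GetValue(item, null);

    return SortDirection == SortDirection.Ascending
        ? Source.OrderBy(KeySelector)
        : Source.OrderByDescending(KeySelector);
}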

Coding in this style is most valuable when it’s a solution-wide convention.  So long as you have code that strays from this design philosophy, you’ll find yourself grumbling and reaching for the inefficient and inelegant Find-and-Replace tool.  The only time it really becomes an issue, then, is when accessing libraries that you have no control over, such as Linq-to-Entities and the Entity Framework, which makes extensive use of magic strings.  In the case of EF, this is mitigated somewhat by your ability to regenerate the code it uses.  For other libraries, it may be possible to write extension methods like the Order method shown above.

It’s my earnest hope that library and framework authors such as the .NET Framework team will seriously consider alternatives to, and an abolition of, “magic strings” and other coding practices that frustrate otherwise-powerful refactoring tools.

Refactoring Across Languages

A tool is only as valuable as it is practical.  The Rename refactoring is more valuable when coding practices don’t frustrate it, as explained above.  Another barrier to the practical use of this tool is the prevalence of multiple languages within and across projects in a Visual Studio solution.  The definition of a project as a single-language container is dubious when you consider that a C# or VB.NET project may also contain HTML, ASP.NET, XAML, or configuration XML markup.  These are all languages with their own parsers and other language services.

So what happens when identifiers are shared across languages and a Rename refactoring is executed?  It depends on the languages involved, unfortunately.

When refactoring a C# class in Visual Studio, the XAML’s x:Class value is also updated.  What we’re seeing here is cross-language refactoring, but unfortunately it only works in one direction.  There is no refactor command to update the x:Class value from the XAML editor, so manually changing it causes my C# class to become sadly out of sync.  Furthermore, this seems to be XAML specific.  If I refactor the name of an .aspx.cs class, the Inherits attribute of the Page directive in the .aspx file doesn’t update.

How frequent do you think it is that someone would want to change a code-behind file for an ASP.NET page, and yet would not want to change the Inherits attribute?  Probably not very common (okay, probably NEVER).  This is a matter of having sensible defaults.  When you change an identifier name in this way, the development environment does not respond in a sensible way by default, forcing the developer to do extra work and waste time.  This is a failure in UI design for the same reason that Intellisense has been such a resounding success: Intellisense anticipates our needs and works with us; the failure to keep identifiers in sync by default is diametrically opposed to this intelligence.  This represents a fragmented and inconsistent design for an IDE to possess, thus my hope that it will be addressed in the near future.

The problem should be recognized as systemic, however, and addressed in a generalized way.  Making individual improvements in the relationships between pairs of languages has been almost adequate, but I think it would behoove us to take a step back and take a look at the future family of languages supported by the IDE, and the circumstances that will quickly be upon us with Microsoft’s Oslo platform, which enables developers to more easily build tool-supported languages (especially DSLs, Domain Specific Languages). 

Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important.

Without this cross-language identifier binding feature, we’ll remain in refactoring hell.  I offered a feature suggestion to the Oslo team regarding this multi-master synchronization of a model across languages that was rejected, much to my dismay.  I’m not sure if the Oslo team is the right group to address this, or if it’s more appropriate for the Visual Studio IDE team, so I’m not willing to give up on this yet.

A Default of Refactor-Rename

The next idea I’d like to propose here is that the Rename refactoring is, in fact, a sensible default behavior.  In other words, when I edit an identifier in my code, I more often than not want all of the references to that identifier to change as well.  This is based on my experience of invoking the refactoring explicitly countless times, compared to the relatively few times I’ve wanted to “break away” an identifier from all the code that references it.

Think about it: if you have 150 references to variable Foo, and you change Foo to FooBar, you’re going to have 150 broken references.  Are you going to create a new Foo variable to replace them?  That workflow doesn’t make any sense.  Why not just start editing the identifier and have the references update themselves implicitly?  If you want to be aware of the change, it would be trivial for the IDE to indicate the number of references that were updated behind the scenes.  Then, if for some reason you really did want to break the references, you could explicitly launch a refactoring tool to “break references”, allowing you to edit that identifier definition separately.

The challenge that comes to mind with this default behavior concerns code that spans across solutions that aren’t loaded into the IDE at the same time.  In principle, this could be dealt with by logging the refactoring somewhere accessible to all solutions involved, in a location they can all access and which gets checked into source control.  The next time the other solutions are loaded, the log is loaded and the identifiers are renamed as specified.

Language Property Paths

If you’ve done much development with Silverlight or WPF, you’ve probably run into the PropertyPath class when using data binding or animation.  PropertyPath objects represent a traversal path to a property such as “Company.CompanyName.Text”.  The travesty is that they’re always “magic strings”.

My argument is that the property path is such an important construct that it deserves to be a core part of language syntax instead of just a type in some UI-platform-specific library.  I once wrote a data binding library for Windows Forms, for which I created my own property path syntax and type, and there are countless non-UI scenarios in which this construct would also be incredibly useful.

The advantage of having a language like C# understand property path syntax is that you avoid a whole class of problems that developers have used “magic strings” to solve.  The compiler can then make intelligent decisions about the correctness of paths, and errors can be identified very early in the cycle.

Imagine being able to pass property paths to methods or return them from functions as first-class citizens.  Instead of writing this:

Binding NameTextBinding = new Binding("Name") { Source = customer1 };

… we could write something like this, have access to the Rename refactoring, and even get Intellisense support when hitting the dot (.) operator:

Binding NameTextBinding = new Binding(@Customer.Name) { Source = customer1 };

In this code example, I use the fictitious @ operator to inform the compiler that I’m specifying a property path and not trying to reference a static property called Name on the Customer class.

With property paths in the language, we could solve our dynamic Linq sort problem cleanly, without using lambda expressions to hack around the problem:

Customers = Customers.Order(@Customer.LastName).Order(@Customer.FirstName, SortDirection.Descending);

That looks and feels right to me.  How about you?

Summary

There are many factors of developer productivity, and I’ve established refactoring as one of them.  In this article I discussed tooling and coding practices that support or frustrate refactoring.  We took a deep look into the most important refactoring we have at our disposal, Rename, and examined how to get the greatest value out of it in terms of personal habits, as well as long-term tooling vision and language innovation.  I proposed including property paths in language syntax due to its general usefulness and its ability to solve a whole class of problems that have traditionally been solved using problematic “magic strings”.

It gives me hope to see the growing popularity of Fluent Interfaces and the use of lambda expressions to provide coding conventions that can be verified by the compiler, and a growing community of bloggers (such as here and here) writing about the abolition of “magic strings” in their code.  We can only hope that Microsoft program managers, architects, and developers on the Visual Studio and .NET Framework teams are listening.

Posted in Data Binding, Data Structures, Design Patterns, Development Environment, Dynamic Programming, Functional Programming, Language Innovation, LINQ, Oslo, Silverlight, Software Architecture, User Interface Design, Visual Studio, Visual Studio Extensibility, Windows Forms | Leave a Comment »

VS Macro to Locate Solution Explorer Item – Corrected

Posted by Dan Vanderboom on April 8, 2008

A few weeks ago, I posted an article about a Visual Studio macro (with source code) to locate and select the item in Solution Explorer corresponding to the active document.  I discovered that Solution Explorer loads its UIHierarchy items on demand, so items only become available in the tree structure once their parent node has been expanded.

The key to solving this was a single line of code:

solution.FindProjectItem(DTE.ActiveDocument.FullName).ExpandView()

This forces Solution Explorer to load the necessary items and also expands the tree so that the item becomes visible.  After that, selecting the item becomes possible.

I’ve updated the original article to correct the source code, and from what I can tell from more extensive testing, it seems to work all the time now.

Posted in My Software, Visual Studio, Visual Studio Extensibility | Leave a Comment »

Visual Studio Macro: Locate Item in Solution Explorer on Demand

Posted by Dan Vanderboom on March 21, 2008

[ For an updated macro that works best with VS2008 and VS2010, see the followup article. ]

By default, Visual Studio’s Solution Explorer will update its selected item based on the currently active document.  This is occasionally useful when exploring unfamiliar code, but it’s annoying when it’s always on, bouncing around as you navigate through larger solutions.

Do you want to disable project item tracking?

Go to Tools -> Options, expand Projects and Solutions, select General, and then uncheck the box next to Track Active Item in Solution Explorer.

But what if you’re looking at a document, and you want to know where in Solution Explorer it’s located?  Unfortunately, you would have to go back into Tools -> Options, …, turn that option back on, locate the project item, and then turn it back off again.

This is very inconvenient.  What would be ideal is a button or hot key that would jump to that item in Solution Explorer just once, on demand.  “Where’s the corresponding project item for this document?  Find it and then don’t track after that.”

After about two hours of messing around with the VS SDK, trying to read the documentation, finding out that this particular option isn’t available for manipulation through automation, and resolving to figure out another way to do it, I finally accomplished this goal and I’m happy to share the macro that I created.

Don’t know how to create a macro?

Adding a macro is easy.  First open the Macro Explorer: press Alt-F8, or if you’ve remapped your keys, go to Tools -> Macros -> Macro Explorer.  You’ll probably have a project in there called MyMacros.  If not, right-click on Macros and select New Macro Project…  A window pops up, and you can enter a project name.  Once you have selected or created a macro project, right-click on the project name and select New module…  I named mine Utilities.  Right-click on the module name and select Edit.  You should now see a VB code editor window.

Copy the following code into the editor and press Ctrl-S to save it.

Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90

Public Module Utilities
    Public Sub TrackProjectItem()
        Dim solution As Solution2 = DTE.Solution
        If Not solution.IsOpen OrElse DTE.ActiveDocument Is Nothing Then Return

        ' Force Solution Explorer to load its on-demand nodes and expand down to the active document.
        solution.FindProjectItem(DTE.ActiveDocument.FullName).ExpandView()

        Dim FileName As String = DTE.ActiveDocument.FullName

        Dim SolutionExplorerPath As String
        Dim items As EnvDTE.UIHierarchyItems = DTE.ToolWindows.SolutionExplorer.UIHierarchyItems
        Dim item As Object = FindItem(items, FileName, SolutionExplorerPath)

        If item Is Nothing Then
            MsgBox("Couldn't find the item in Solution Explorer.")
            Return
        End If

        DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate()
        DTE.ActiveWindow.Object.GetItem(SolutionExplorerPath).Select(vsUISelectionType.vsUISelectionTypeSelect)
    End Sub

    Public Function FindItem(ByVal Children As UIHierarchyItems, ByVal FileName As String, ByRef SolutionExplorerPath As String) As Object
        For Each CurrentItem As UIHierarchyItem In Children
            ' If this node wraps a ProjectItem, compare each of its files to the target file name.
            Dim TypeName As String = Microsoft.VisualBasic.Information.TypeName(CurrentItem.Object)
            If TypeName = "ProjectItem" Then
                Dim projectitem As EnvDTE.ProjectItem = CType(CurrentItem.Object, EnvDTE.ProjectItem)
                Dim i As Integer = 1
                While i <= projectitem.FileCount
                    If projectitem.FileNames(i) = FileName Then
                        SolutionExplorerPath = CurrentItem.Name
                        Return CurrentItem
                    End If
                    i = i + 1
                End While
            End If

            ' Recurse into child nodes, building the Solution Explorer path on the way back out.
            Dim ChildItem As UIHierarchyItem = FindItem(CurrentItem.UIHierarchyItems, FileName, SolutionExplorerPath)
            If Not ChildItem Is Nothing Then
                SolutionExplorerPath = CurrentItem.Name + "\" + SolutionExplorerPath
                Return ChildItem
            End If
        Next

        Return Nothing
    End Function
End Module

Now let’s hook it up to a keyboard hot key to make it super fast to access.  Open Tools -> Options, expand Environment, and select Keyboard.  In the text box labeled Show commands containing, type part of the macro name, such as TrackProjectItem, and you’ll see your new macro highlighted in the list.  In the dropdown control labeled Use new shortcut in, select Text Editor.  Click in the box that says Press shortcut keys, and press a key combination.  I chose Alt-T (for Track), which to my surprise was available in that context.

Try it out.  Open up a few code or XML windows.  Make one of those windows active and press your hot key.  The item in Solution Explorer that corresponds to the current document window will activate.

Posted in Invention-A-Day, Visual Studio, Visual Studio Extensibility | 47 Comments »