Channel: The Old New Thing

Why does my C++/WinRT project get errors of the form “unresolved external symbol … consume_Something”?


You set up a new C++/WinRT project and build it, and everything looks great.

#include <winrt/Windows.Gaming.Input.h>

void CheckGamepads()
{
    auto gamepads =
        winrt::Windows::Gaming::Input::Gamepad::Gamepads();
    for (auto&& gamepad : gamepads)
    {
        check(gamepad);
    }
}

The code builds just fine except that you get a linker error that makes no sense. (Let’s face it, most linker errors make no sense until you put on your linker-colored glasses.)

error LNK2019: unresolved external symbol "public: struct winrt::Windows::Foundation::Collections::IIterator<struct winrt::Windows::Gaming::Input::Gamepad> __thiscall winrt::impl::consume_Windows_Foundation_Collections_IIterable<struct winrt::Windows::Foundation::Collections::IIterable<struct winrt::Windows::Gaming::Input::Gamepad>,struct winrt::Windows::Gaming::Input::Gamepad>::First(void)const " (?First@?$consume_Windows_Foundation_Collections_IIterable@U?$IIterable@W4Gamepad@Gaming@Input@Windows@winrt@@@Collections@Foundation@Windows@winrt@@W4Gamepad@Gaming@Input@45@@impl@winrt@@QBE?AU?$IIterator@W4Gamepad@Gaming@Input@Windows@winrt@@@Collections@Foundation@Windows@3@XZ) referenced in function "struct winrt::Windows::Foundation::Collections::IIterator<struct winrt::Windows::Gaming::Input::Gamepad> __stdcall winrt::impl::begin<struct winrt::Windows::Foundation::Collections::IIterable<struct winrt::Windows::Gaming::Input::Gamepad>,0>(struct winrt::Windows::Foundation::Collections::IIterable<struct winrt::Windows::Gaming::Input::Gamepad> const &)" (??$begin@U?$IIterable@W4Gamepad@Gaming@Input@Windows@winrt@@@Collections@Foundation@Windows@winrt@@$0A@@impl@winrt@@YG?AU?$IIterator@W4Gamepad@Gaming@Input@Windows@winrt@@@Collections@Foundation@Windows@1@ABU?$IIterable@W4Gamepad@Gaming@Input@Windows@winrt@@@3451@@Z)

What the heck is going on here?

Take away all the decorations and it boils down to this:

unresolved external symbol "winrt::impl::consume_...IIterable<...>::First()" referenced in function "begin(winrt::IIterable<...> const&)."

The linker couldn’t find a definition for the First method.

The answer from the linker’s point of view is obvious: You called this consume_BlahBlah method but never defined it.

Yeah, so tell me something I don’t know.

Each C++/WinRT header file contains the information needed to call methods on the classes in that namespace. In our case, we included Windows.Gaming.Input.h, which tells us how to call methods on winrt::Windows::Gaming::Input::Gamepad objects. That made it possible to call Gamepad::Gamepads(). The resulting gamepads variable is a winrt::Windows::Foundation::Collections::IVectorView<Gamepad>. We then use a ranged for statement to enumerate them, and that means that we’re calling methods on the gamepads object, which means that we’re calling methods on a winrt::Windows::Foundation::Collections::IVectorView object.

Ah, but we never told the compiler how to call the methods of IVectorView. The Windows.Gaming.Input.h header file included only the information to allow the methods of Gamepad to be called. “Okay, I got you all set up for Gamepad.” Any types required from other interfaces were left as forward declarations. “If you need them, you can get the definitions yourself.”¹

We used those forward declarations without ever defining them, hence the linker error.

The solution is to include the required header file for the namespace.

#include <winrt/Windows.Foundation.Collections.h>

This is one of those rookie mistakes that make you scratch your head the first time you encounter it. The need to include the header file is mentioned in a big green box in the documentation, but that’s not much consolation after you’ve lost a few hours trying to figure it out.

There’s some good news and bad news about this error message.

The good news is that this error message is going away. The bad news is that it’s being replaced with a different error message. (But hopefully the new one is easier to understand.) More details next time.

¹ The idea is that you pay only for the namespaces you use. If every header file included its transitive closure of dependencies, (1) you would create circular dependencies, and (2) including a single header file would end up including all the other header files when you chased through all the dependencies.

The idea of “pay for play” is not unique to C++/WinRT. The C++ standard library follows the same principle. If you want std::string, you need to #include <string>. If you include a header file that has a method that takes a string, you will end up with only enough information to call that method. It doesn’t mean that you get all of <string> automatically.

 

The post Why does my C++/WinRT project get errors of the form “unresolved external symbol … consume_Something”? appeared first on The Old New Thing.


Why does my C++/WinRT project get errors of the form “consume_Something: function that returns ‘auto’ cannot be used before it is defined”?


Last time, we investigated a mysterious error that occurs when linking a C++/WinRT project, and I noted that there’s some good news and some bad news. The good news is that this error message is going away. The bad news is that it’s being replaced with a different error message that you have to learn.

Let’s take another look at the code that triggers this error.

#include <winrt/Windows.Gaming.Input.h>

void CheckGamepads()
{
    auto gamepads =
        winrt::Windows::Gaming::Input::Gamepad::Gamepads();
    for (auto&& gamepad : gamepads)
    {
        check(gamepad);
    }
}

Instead of getting a linker error, you get a compile-time error at the point you attempt to consume an interface whose header file you failed to include.

test.cpp(7): error C3779: 'winrt::impl::consume_Windows_Foundation_Collections_IIterable<D,winrt::Windows::Gaming::Input::Gamepad>::First': a function that returns 'auto' cannot be used before it is defined
with
[
    D=winrt::Windows::Gaming::Input::Gamepad
]
note: see declaration of 'winrt::impl::consume_Windows_Foundation_Collections_IIterable<D,winrt::Windows::Gaming::Input::Gamepad>::First'
with
[
    D=winrt::Windows::Gaming::Input::IVisualCollection
]

For the impatient: The problem is that you are missing the header file for the interface you are using. In this case, we are using Windows.Foundation.Collections.IIterable, so we need to include

#include <winrt/Windows.Foundation.Collections.h>

You can read the pull request that makes the change to detect the error at compile time rather than link time.

The trick is that the forward-declared methods are declared as returning auto with no trailing return type and no body. This means “I want the compiler to deduce the return type (but I’m not giving any clues yet).” If you try to call the method before the method has been implemented, then the compiler reports an error because it doesn’t yet have the necessary information to determine the return type.

Hopefully the new error message will make it easier to figure out what went wrong. At least it gives you a file name and line number that points to the place where the unimplemented method is used, and the error message includes the name of the type whose definition is missing.

 


I called AdjustTokenPrivileges, but I was still told that a necessary privilege was not held


A customer had a service running as Local System and wanted to change some token information. The information that they wanted to change required SeTcbPrivilege, so they adjusted their token privileges to enable that privilege, but the call still failed with ERROR_PRIVILEGE_NOT_HELD: “A required privilege is not held by the client.”

Here’s a sketch of their code. All function calls succeed except the last one.

HANDLE processToken;
OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS,
    &processToken);

HANDLE newToken;
DuplicateTokenEx(processToken, TOKEN_ALL_ACCESS, nullptr,
    SECURITY_MAX_IMPERSONATION_LEVEL, TokenPrimary, &newToken);

TOKEN_PRIVILEGES privileges;
privileges.PrivilegeCount = 1;
privileges.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
LookupPrivilegeValue(nullptr, SE_TCB_NAME,
    &privileges.Privileges[0].Luid);

AdjustTokenPrivileges(newToken, FALSE, &privileges, 0,
    nullptr, nullptr);

DWORD sessionId = ...;
SetTokenInformation(newToken, TokenSessionId, &sessionId,
    sizeof(sessionId)); // FAILS!

This fails because we adjusted the privileges of the wrong token!

The TCB privilege needs to be enabled on the token that is performing the operation, not the token that is the target of the operation. Because you need privileges to do things, not to have things done to you.

The security folks explained that the correct order of operations is

  1. ImpersonateSelf().
  2. OpenThreadToken().
  3. AdjustTokenPrivileges(threadToken).
  4. Do the thing you wanna do. (In this case, duplicate the token and change the session ID.)
  5. Close the thread token.
  6. RevertToSelf().

The overall sequence therefore goes like this:

void DoSomethingAwesome()
{
 if (ImpersonateSelf(SecurityImpersonation)) {
  HANDLE threadToken;
  if (OpenThreadToken(GetCurrentThread(), TOKEN_ALL_ACCESS,
                      TRUE, &threadToken)) {
   TOKEN_PRIVILEGES privileges;
   privileges.PrivilegeCount = 1;
   privileges.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
   if (LookupPrivilegeValue(nullptr, SE_TCB_NAME,
                            &privileges.Privileges[0].Luid)) {
    if (AdjustTokenPrivileges(threadToken, FALSE, &privileges, 0,
                              nullptr, nullptr)) {
     // Now do the thing you wanna do.
     HANDLE newToken;
     if (DuplicateTokenEx(threadToken, TOKEN_ALL_ACCESS, nullptr,
                          SECURITY_MAX_IMPERSONATION_LEVEL,
                          TokenPrimary, &newToken)) {
      DWORD sessionId = ...;
      if (SetTokenInformation(newToken, TokenSessionId,
                              &sessionId, sizeof(sessionId))) {
       // Hooray
      }
      CloseHandle(newToken);
     }
    }
   }
   CloseHandle(threadToken);
  }
  RevertToSelf();
 }
}

Of course, in real life, you probably would use RAII types to ensure that handles get closed and to remember to RevertToSelf() after a successful ImpersonateSelf().

 


A bug so cool that the development team was reluctant to fix it


Long ago, there was a bug filed against Outlook that was titled “Outlook crashes when used violently.”

Well that’s an interesting title.

The bug was also interesting: What you had to do was create a Note and then drag it around the screen continuously for several minutes. Eventually, Outlook crashed.

What was happening was that each time the Note window moved, even just one pixel, Outlook created an entry in its Undo history. Drag the window around long enough, and the Undo history fills up with remembered Note positions, until eventually you run out of memory and crash.

If you stopped before you ran out of memory, then you could use this bug as a parlor trick: Press and hold the Undo hotkey Ctrl+Z, and the Note will zoom around the screen, retracing its steps.

This bug was so cool that the development team was reluctant to fix it.

They did fix it, but it was accompanied by a twinge of regret.


In the file copy conflict dialog, what happened to the option to copy the new file with a numeric suffix?


If you use Explorer to paste a file into a directory, and there’s already a file with that name in the directory, you get a Replace or Skip Files dialog with options to replace the existing file, skip the file, or compare info for both files. A customer remembered that in earlier versions of Windows, there was no option to compare info, but there was an option to copy the file with a new name, so that the directory contains both the old file (with its original name) and the new file (with a numeric suffix). Where did that option go?

Replace or Skip Files?
Copy 1 item from Documents to Desktop
The destination already has a file named “Awesome.txt”
Replace the file in the destination
Skip this file
🗎🗎 Compare info for both files

The option to keep both files is still there. It’s hiding under Compare info for both files.

1 File Conflict
Which files do you want to keep?
If you select both versions, the copied file will have a number added to its name.
  ☐ Files from Documents ☐ Files from Desktop  
 
  Awesome.txt  
 
🗎 11/12/2015 5:00:00 PM
10.6 KB
🗎 7/29/2015 5:00:00 PM
10.2 KB

☐ Skip 0 files with the same date and size
Continue   Cancel

In the resulting dialog, you are given information about the conflicting files and can select which version you want. If you select both versions, then the copied file will have a number added to its name.

 


Why does my C++/WinRT project get errors of the form ‘winrt::impl::produce’: cannot instantiate abstract class, missing method GetBindingConnector


So your C++/WinRT project gets build failures of the form

base.h(8208): error C2259: 'winrt::impl::produce<D, I>': cannot instantiate abstract class
with
[
    D=winrt::YourNamespace::implementation::YourClass,
    I=winrt::Windows::UI::Xaml::Markup::IComponentConnector2
] (compiling source file YourClass.cpp)
base.h(8208): note: due to following members: (compiling source file YourClass.cpp)
base.h(8208): note: 'int32_t winrt::impl::abi<winrt::Windows::UI::Xaml::Markup::IComponentConnector2, void>::type::GetBindingConnector(int32_t, void *, void **) noexcept': is abstract (compiling source file YourClass.cpp)

Normally, the GetBindingConnector function is defined in YourClass.xaml.g.hpp, but that header file isn’t being generated.

What’s going on, and how do you fix it?

The problem is that you forgot to include the header file

#include "winrt/Windows.UI.Xaml.Markup.h"

Add that line to, say, your precompiled header file, and things should work again.

You are likely to run into this problem when upgrading a project from C++/WinRT 1.0 to C++/WinRT 2.0. The C++/WinRT 2.0 compiler is much better about reducing header file dependencies, which improves build times. If you forgot to include winrt/Windows.UI.Xaml.Markup.h in a C++/WinRT 1.0 project, you often got away with it, because some other C++/WinRT 1.0 header file you included happened to include winrt/Windows.UI.Xaml.Markup.h as a side effect. You were getting a free ride on the other header file.

 


Why does my C++/WinRT project get errors of the form “Unresolved external symbol void* __cdecl winrt_make_YourNamespace_YourClass(void)”?


So your C++/WinRT project gets build failures of the form

unresolved external symbol "void * __cdecl winrt_make_YourNamespace_YourClass(void)" (?winrt_make_YourNamespace_YourClass@@YAPAXXZ) referenced in function
"void * __stdcall winrt_get_activation_factory(class std::basic_string_view<wchar_t, struct std::char_traits<wchar_t> > const &)" (?winrt_get_activation_factory@@YGPAXABV?$basic_string_view@_WU?$char_traits@_W@std@@@std@@@Z)

What’s going on, and how do you fix it?

The problem is that you used the -opt flag with cppwinrt.exe, but didn’t do the work necessary to support those optimizations.

To each of your implementation files (such as YourClass.cpp), add the line

#include "YourClass.g.cpp"

If your project defines classes in multiple Windows Runtime namespaces, then the inclusion should be

#include "Sub/Namespace/YourClass.g.cpp"

If you specified the -prefix option, then the inclusion should be

#include "Sub.Namespace.YourClass.g.cpp"

(Personally, I put it immediately after the inclusion of the corresponding YourClass.h header file.)

In a Visual Studio project, you can enable optimizations by setting

<CppWinRTOptimized>true</CppWinRTOptimized>

in your project file.

To turn on dotted prefixes, you can set

<CppWinRTUsePrefixes>true</CppWinRTUsePrefixes>

The main optimization enabled by the -opt flag in C++/WinRT 2.0 is bypassing the call to RoGetActivationFactory if the class is implemented in the same module. Instead, the call goes directly to the implementation. This also removes the need to declare the runtime class in your manifest if it is used only within the module (say, by XAML binding).

 


The Resource Compiler defaults to CP_ACP, even in the face of subtle hints that the file is UTF-8


The Resource Compiler assumes that its input is in the ANSI code page of the system that is doing the compiling. This may not be the same as the ANSI code page of the system that the .rc was authored on, nor may it be the same as the ANSI code page of the system that will consume the resulting resources.

It also completely ignores any clues in the file itself.

The saga begins in 1981.

At this time, code pages roamed the earth. There was no way to know what encoding to use for a file; you just assumed it was the ambient code page for the system that opened the file and hoped for the best.

This is the world the Resource Compiler was born into.

STRINGTABLE BEGIN
IDS_MYSTRING "Hello, world."
END

Some years later, Unicode was invented, and the Resource Compiler let you indicate that you wanted a Unicode string by using the L prefix.

STRINGTABLE BEGIN
IDS_MYSTRING L"Hello, world."
END

In the above case, the L didn’t have any effect since the string itself limits itself to 7-bit ASCII. But let’s say that you used a fancy apostrophe in the Windows-1252 code page.

STRINGTABLE BEGIN
IDS_MYSTRING L"What’s up?"
END

There are two things to note. First is that you need to put the L prefix on the string to get it to be interpreted as Unicode. And second, the apostrophe is encoded as the single byte 92h because the file is in the Windows-1252 code page.

Now, it’s possible that the system doing the compiling isn’t using Windows-1252 as its default code page. For example, you might author the files in Windows-1252 because your main office is in Redmond, Washington, but you then send the file to your Japanese office, and their code page is 932. The byte sequence 92h 73h means “apostrophe, small Latin letter s” in the Windows-1252 code page, but in code page 932, that byte sequence represents the character 痴. When the Japanese office compiles your resource script, they get What痴 up?. This is already embarrassing enough, but it’s compounded by the fact that the character 痴 means gonorrhea.

To avoid this problem, the Resource Compiler lets you declare the code page in which the subsequent lines should be interpreted. This removes any dependency on the execution environment of the compiler.

#pragma code_page(1252)

STRINGTABLE BEGIN
IDS_MYSTRING L"What’s up?"
END

Some years later, UTF-8 was introduced. This created an interesting problem, because you might load a file as Windows-1252, but then when you save it, your text editor “helpfully” converts it to UTF-8. This change often goes undetected because file comparison tools will frequently “helpfully” normalize the two files into a common encoding before comparing them, thereby hiding the encoding change.

And then you get a bug that says “Garbage characters in message. Message is supposed to say What’s up?, but instead it says What’s up?.”

What happened is that the byte 92h in Windows-1252 was re-encoded into UTF-8 as the bytes E2h 80h 99h. Those bytes then were interpreted by the compiler as Windows-1252, resulting in ’. The presence of a UTF-8 BOM at the start of the file was a subtle hint that the file was really UTF-8 encoded, but computers aren’t very good at subtlety. They just follow the rules they were given, and that rule is “Interpret the bytes in the system ANSI code page unless given explicit instructions to the contrary.”

The fix is to give explicit instructions to the contrary. Put this at the top of the file:

#pragma code_page(65001) // UTF-8

Now save the file in UTF-8.

Now you’re all set. Text editors nowadays will happily “help” you out by silently converting to UTF-8, but I don’t know of any that silently convert to Windows-1252.

 



How can I determine in a C++ header file whether C++/CX is enabled? How about C++/WinRT?


Suppose you’re writing a header file that wants to take advantage of C++/CX or C++/WinRT features if the corresponding functionality is available.

// async_event_helpers.h

#if (? what goes here ?)

// RAII type to ensure that a C++/CX deferral is completed.

template<typename T>
struct ensure_complete
{
   ensure_complete(T^ deferral) : m_deferral(deferral) { }
   ~ensure_complete() { if (m_deferral) m_deferral->Complete(); }

  ensure_complete(ensure_complete const&) = delete;
  ensure_complete& operator=(ensure_complete const&) = delete;

  ensure_complete(ensure_complete&& other)
  : m_deferral(std::exchange(other.m_deferral, {})) { }
  ensure_complete& operator=(ensure_complete&& other)
  { m_deferral = std::exchange(other.m_deferral, {}); return *this; }

private:
   T^ m_deferral;
};
#endif

#if (? what goes here ?)

// RAII type to ensure that a C++/WinRT deferral is completed.

template<typename T>
struct ensure_complete
{
   ensure_complete(T const& deferral) : m_deferral(deferral) { }
   ~ensure_complete() { if (m_deferral) m_deferral.Complete(); }

  ensure_complete(ensure_complete const&) = delete;
  ensure_complete& operator=(ensure_complete const&) = delete;

  ensure_complete(ensure_complete&&) = default;
  ensure_complete& operator=(ensure_complete&&) = default;

private:
   T m_deferral{ nullptr };
};
#endif

What magic goes into the #if statement to enable the corresponding helpers only if the prerequisites have been met?

For C++/CX, the magic incantation is

#ifdef __cplusplus_winrt

If C++/CX is enabled, then the __cplusplus_winrt symbol is defined as the integer 201009, which is presumably a version number.

For C++/WinRT, the magic symbol is

#ifdef CPPWINRT_VERSION

This is defined to a string literal representing the version of C++/WinRT that is active. In addition to serving as a feature detector, this macro is used to ensure that all of the C++/WinRT header files you use are compatible with each other. (If not, you will get a compile-time assertion failure.)

The C++/WinRT team cautions that this is the only macro in the C++/WinRT header file that is supported for feature detection. Do not rely on the other WINRT_* macros in the C++/WinRT header files. They are implementation details and may change at any time.

 


What order do the items in the “New” menu appear? It looks kind of random.


When you right-click on an empty space in an Explorer folder and select the New menu item, you always start with Folder and Shortcut, but the rest seems to be a jumbled list of file types.

Folder
Shortcut
 
 
 
Microsoft Access Database
Bitmap image
Contact
Microsoft Word Document
Microsoft PowerPoint Presentation
Microsoft Publisher Document
Rich Text Document
Text Document
Microsoft Excel Worksheet
Compressed (zipped) folder

The list looks jumbled, but it’s a very specific kind of jumbled.

The items in the New menu are discovered by looking for ShellNew subkeys in HKEY_CLASSES_ROOT. And a side effect of the way Explorer walks through the registry and collects the results is that they end up sorted alphabetically by file extension.

  Folder
  Shortcut
 
 
 
.accdb Microsoft Access Database
.bmp Bitmap image
.contact Contact
.docx Microsoft Word Document
.pptx Microsoft PowerPoint Presentation
.pub Microsoft Publisher Document
.rtf Rich Text Document
.txt Text Document
.xlsx Microsoft Excel Worksheet
.zip Compressed (zipped) folder

This behavior is not contractual. It’s just an artifact of the implementation. Maybe it’ll change someday.


If you can use GUIDs to reference files, why not use them to remember “recently used” files so they can survive renames and moves?


You can ask for a GUID identifier for a file, and use that GUID to access the file later. You can even recover a (perhaps not the) file name from the GUID.

David Trapp wishes programs would use GUIDs to reference files so that references to recently used files can survive renames and moves.

Be careful what you wish for.

It is a common pattern to save a file by performing a sequence of steps.

  • Create a temporary file with the new contents.
  • Rename the original file to a *.bak or some other name.
  • Rename the temporary file to the original name.
  • (optional) Delete the *.bak file.

Programs use this multi-step process so that the old copy of the file remains intact until the new file has been saved successfully. Once that’s done, they swap the new file into place.

Unfortunately, this messes up your GUID-based accounting system.

If you tracked the file by its GUID, then here’s what you see:

  • Create a temporary file, which gets a new GUID.
  • Rename the original file. It retains its GUID but has a new name.
  • Rename the temporary file. It retains its GUID but has a new name.

The GUID that you remembered does not refer to the new file; it refers to the old file. Even worse, if the program took the optional step of deleting the renamed original, you now have a GUID that refers to a deleted file, which means that when you try to open it, the operation will fail.

Programs can avoid this problem by using the Replace­File function to promote the temporary file. The Replace­File function preserves the file identifier, among other things.

In practice, use of the Replace­File function is not as widespread as you probably would like, so using only GUIDs to track files will technically track the file, but may not track the file you intend. Because people still think of the file name as the identifier for a file, not its GUID.


What should you do if somebody passes a null pointer for a parameter that should never be null? What if it’s a Windows Runtime class?


If you have a function for which a parameter may not be null, what should you do if somebody passes null anyway?

There are multiple layers to this question, depending on the technology you are using, so let’s start small and work our way up.

If the function runs in the same process as the caller, then you can just crash. No security boundary was crossed. The caller has a logic error where they thought something was non-null, but it ended up being null, and there’s no real recovery from a logic error. Dereferencing the null pointer in the normal course of business will result in an access violation, and that will crash the caller’s process (which happens to be the same as the process your function is in).

If possible, crash early in the function, so that the reason is more clear. Put a default value into the output parameter, for example. This convention is fairly common for COM methods, because output pointers are generally expected to contain something on exit, even if the function as a whole fails. (This rule is important in the case where the function call has been marshaled, because the result of the function call needs to be marshaled back to the caller, and if you put garbage in the output parameter, the marshaler will crash trying to copy the results back to the caller.)

If the function runs in a separate process from the caller, then you need to protect the integrity of your process. On Windows, the standard mechanisms for inter-process function calls are COM or RPC (the layer beneath COM). In those cases, the function returns an HRESULT, and it is common to report E_POINTER to say, “You passed a null pointer when a null pointer isn’t allowed.”

But wait, there’s more. If you are indeed using COM or RPC for your inter-process function calls, then the RPC marshaling layer will check for null pointers so you don’t have to! In your interface definition (IDL) file, you annotate pointer parameters to say whether a null pointer is allowed. If you write [ref], then a null pointer is not allowed, but if you write [unique], then a null pointer is permitted. If you say [in] or [out] without a modifier, then the modifier defaults to the pointer_default for the enclosing interface. And if there is no pointer_default declaration, then the default default is ref.

Once you’ve annotated your pointers, the RPC infrastructure does the parameter validation for you. If somebody passes a null pointer for a parameter that is annotated as [ref], then RPC fails the call immediately with the error RPC_X_NULL_REF_POINTER, and the call never reaches your implementation. Of course, if your function was called directly from within the process, it won’t go through the RPC layer, and an invalid pointer can get through.

If you put the cases of in-process and out-of-process callers together, you see that the conclusion is “Go ahead and dereference those pointers.” If the caller is in-process, then it’s okay to crash because you are crashing the caller’s process (which happens to be the same process that you are in). If the caller is out-of-process, then the RPC layer will prevent invalid null pointers from getting through.

There’s an additional wrinkle to this general principle, however, for the case where you are implementing a Windows Runtime class. Windows Runtime objects are primarily consumed through projection, which is the mechanism by which the ugly low-level infrastructure is exposed to higher-level languages in a way that makes more sense for each language. For example, the low-level HSTRING is exposed to C# and Visual Basic as a System.String, to JavaScript as String, to C++/CX as a String^, and to C++/WinRT as a winrt::hstring.

In the case where you are implementing a Windows Runtime class, and somebody passes a null pointer for an input parameter, then instead of crashing, you should return a COM error code, traditionally E_POINTER. This error code will be transformed by the projection into a language-specific exception.

It’s better to convert the invalid null pointer to a language-specific exception because that integrates better with language debugging tools. The debugger will see a language exception and give the developer a chance to inspect the exception to see what went wrong. If you had dereferenced the null pointer in native code, the C# debugger (for example) will report that an exception occurred in unmanaged code, and there is unlikely to be a meaningful stack trace because the exception was generated far away from anything the C# debugger can see.

This principle applies only to input parameters. You can freely dereference output parameters because the projections will always pass a valid output pointer. (The C# developer didn’t pass an output pointer explicitly. The C# developer merely called the method, and your [out, retval] pointer was created by the projection.)

You might have observed that the consequences for passing an invalid null parameter vary depending on whether the method call is marshaled or not. If marshaled, then the result is RPC_X_NULL_REF_POINTER, but if not marshaled, then the result is E_POINTER. While this seems strange, it’s also inconsequential, because any sort of exception from a Windows Runtime method is considered fatal. The process is crashing either way, and the developer studying the crash dumps will know that both RPC_X_NULL_REF_POINTER and E_POINTER mean “You passed a null pointer when you shouldn’t have.”

Related viewing: De-fragmenting C++: Making exceptions more affordable and usable, in particular, the part where Herb Sutter talks about the difference between errors and bugs.

 

The post What should you do if somebody passes a null pointer for a parameter that should never be null? What if it’s a Windows Runtime class? appeared first on The Old New Thing.

Why does SetFocus fail without telling me why?


One of my colleagues was debugging a program and discovered that a call to Set­Focus returned NULL, which the documentation calls out as indicating an error condition. However, a follow-up call to Get­Last­Error() returned 0, which means “Everything is just fine.”

After much debugging, they figured it out: There was a WH_CBT hook that was intercepting the Set­Focus call and rejecting the focus change by returning TRUE.

“It would have been nice to have received a useful error code like ERROR_YOU_JUST_GOT_SCREWED_BY_A_HOOK.”

You don’t get a useful error because the window manager doesn’t know that the hook is screwing with you. You called Set­Focus. The window hook said, “Nope, don’t change focus. It’s all good. No worries.” The window manager says, “Okay, well, then I guess it’s all taken care of.”

Hooks let you modify or even replace certain parts of the window manager. If you do that, then it’s on you to do so in a manner that will not confuse the rest of the system. If your hook wants to make Set­Focus fail without setting an error code, well, that’s your decision. The system is not going to call Set­Last­Error(ERROR_YOU_JUST_GOT_SCREWED_BY_A_HOOK) because that might overwrite an error code set by the hook.

In this specific example, the point of the WH_CBT hook is to assist with computer-based training: The program installs a CBT hook, which can then do things like prevent the program from changing focus so that the window containing the training materials retains focus. The underlying assumption is that a CBT hook is going to mess around only with windows that it is already in cahoots with.

“Oh, this is my print dialog. I’m going to prevent it from taking focus so that my instructions on how to use the printer stay on screen with focus. I’m also going to make changes to my print dialog function so it doesn’t freak out when it fails to get focus.”

Whatever program installed this CBT hook didn’t limit their meddling to windows they already controlled. This means that their actions sowed confusion among other windows that weren’t part of their little game.

I suspect no actual computer-based training was going on at all. The CBT hook was being used not for its stated purpose of computer-based training, but rather because it provided a way to alter the behavior of the window manager in very fundamental ways, and those alterations somehow fit into the program’s world view.

Somebody who installs a hook can alter the behavior of the system, and it’s important that they do it right, so that their changes still maintain the contracts promised by the system. One of those contracts is that when Set­Focus fails, it tells you why.

Related reading: The case of the file that won’t copy because of an Invalid Handle error message.

 

The post Why does SetFocus fail without telling me why? appeared first on The Old New Thing.

A simple workaround for the fact that std::equal takes its predicate by value


The versions of the std::equal function that take a binary predicate accept the predicate by value, which means that if you are using a functor, it will be copied, which may be unnecessary or unwanted.

In my case, the functor had a lot of state, and I didn’t want to copy it.

class comparer
{
  ...

  template<typename R>
  bool ranges_equiv(R const& left, R const& right)
  {
    using T = std::decay_t<decltype(*begin(left))>;
    return std::equal(
      begin(left), end(left),
      begin(right), end(right),
      equiv<T>);
  }

  template<typename T>
  bool equiv(T const& left, T const& right) = delete;

  template<>
  bool equiv(Doodad const& left, Doodad const& right)
  {
    return (!check_names || equiv(left.Name(), right.Name())) &&
           (!check_children || ranges_equiv(left.Children(), right.Children()));
  }

  ... other overloads omitted ...
};

The idea behind the comparer is that you configure it with information about what you care about and what you don’t, and then you call equiv and let it walk the object hierarchy comparing the things you asked for according to the rules you specified.

This works great, except that std::equal copies its predicate, and our comparer is somewhat expensive to copy, since it may have lots of configuration std::strings and stuff. What we’re looking for is a version that takes the predicate by reference, so that we can use the same comparer all the way down.

The workaround is to replace the predicate with something that is cheap to copy.

  template<typename R>
  bool ranges_equiv(R const& left, R const& right)
  {
    return std::equal(
      begin(left), end(left),
      begin(right), end(right),
      [this](auto&& l, auto&& r) { return equiv(l, r); });
  }

Instead of passing a full comparer object, we pass a lambda that captures the comparer‘s this pointer. This lambda is cheap to copy, and it allows us to reuse the same comparer all the way down the object hierarchy.

This solution looks obvious in retrospect, but I got all hung up trying to create a cheap copyable object, like a nested type called compare_forwarder that kept a std::reference_wrapper to the comparer, before realizing that I was just writing a verbose version of a lambda.

 

The post A simple workaround for the fact that <CODE>std::equal</CODE> takes its predicate by value appeared first on The Old New Thing.

What is WofCompressedData? Does WOF mean that Windows is a dog?


A customer doing performance analysis of their program discovered that there were reads from an alternate data stream called Wof­Compressed­Data. On the Internet, if you search for “Wof­Compressed­Data”, you mostly see people wondering what it is. Some people suspect that it’s malware, and others suspect (or even state confidently) that it’s an artifact of anti-malware software and can be deleted.

What is Wof­Compressed­Data?

The documentation for wofapi.h says merely “This header is used by Data Access and Storage.” For more information, it refers you to another web page that contains no additional information.

WOF stands for Windows Overlay Filter, which is a nice name that doesn’t really tell you much about what it does or what it’s for.

First, let’s look at how Windows was installed before the introduction of the Windows Overlay Filter.

The Windows installation begins with an install.wim file that contains basically all of Windows. A WIM file is a container file, similar in spirit to other container files, like ZIP and Cabinet. Traditionally, the WIM file is copied to the recovery partition for use during emergencies, such as push-button reset. The contents of the WIM file are then uncompressed, and corresponding files are created on your boot volume, and it is these uncompressed files that are used when you run Windows. The WIM file sits in your recovery partition, ignored, but waiting for its opportunity to spring into action should the need arise.

This traditional layout means that every Windows system file is present twice: A compressed copy is in the WIM file on the recovery partition, and an uncompressed copy in the live Windows installation.

Windows 8.1 introduced a feature known as Windows Image File Boot (WIMBoot): A system manufacturer can set up a system so that the recovery partition contains the install.wim file as well as a custom.wim file which contains the OEM customizations, such as drivers for any special hardware. But instead of uncompressing the files and putting them into the live Windows installation, WIMBoot creates tiny little stub files in the live Windows installation that say, “Hey, um, I’m just a stub. If you want to see the contents, you want to uncompress those bytes over there.” WIMBoot therefore avoids the duplication by allowing the live Windows installation to share the disk storage with the WIM file on the recovery partition.

Furthermore, since the file contents in the WIM are compressed, this reduces disk I/O, though naturally at a cost of higher CPU usage in order to perform the decompression.

The way this magic works is that the live Windows files are formally sparse NTFS files, so that when you ask for the file size, you get the correct number, even though there is no actual data in them. When you open the file, the Windows Overlay Filter steps in and generates the data by decompressing the data in the WIM file on demand.

Unlike native NTFS file compression, the Windows Overlay Filter supports only read operations. This means that it doesn’t need to sector-align each compressed chunk,¹ so the compressed data can be packed more tightly together. If you open the file for writing,² the Windows Overlay Filter just decompresses the entire file, turning it back into a plain file.³ At the time WIMBoot was released, there was also a guidance document warning you not to run around opening files for writing unnecessarily. Not opening files for writing unnecessarily is good advice in general, but it’s particularly important for WIMBoot in order to prevent unnecessary conversion.

The Windows Overlay Filter can take advantage of newer compression algorithms developed over the past 20 years: algorithms that produce better compression ratios, can run in parallel on multiple cores, and require less CPU and memory for decompression. It can also use algorithms tailored to the scenario: For example, it can choose algorithms where compression is expensive but decompression is cheap.

Changing the native NTFS file compression would be a disk format breaking change, which is not something taken lightly. Doing it as a filter provides much more flexibility. The downside is that if you mount the volume on a system that doesn’t support the Windows Overlay Filter, all you see is an empty file. Fortunately, WOF is used only for system-installed files, and if you are mounting the volume onto another system, it’s probably for data recovery purposes, so you’re interested in user data, not system files.

It’s called the “Windows Overlay Filter” because it “overlays” virtual files into a directory that also contains normal physical files.

When you read through the above description, you may have realized something: Whenever Windows Update updates a file, that file is converted from a virtual file to a plain uncompressed physical file because the file’s backing data is no longer in the WIM file. This means that over time, the Windows system files occupy more and more disk space as more of them no longer match the copy in the WIM and revert from their compressed form to their uncompressed form.

Windows 10 introduced a feature known as Compact OS, which takes a different approach. With Compact OS, the Windows Overlay Filter gains the ability to recompress files: Based on a hardware performance check, the system may decide to take the updated files, recompress them, store the compressed data in the Wof­Compressed­Data alternate data stream, and free the original uncompressed data using the same “sparse file” trick to make the file appear as if it were a normal file.

If you open one of these recompressed files, the file is decompressed on the fly based on data in the Wof­Compressed­Data alternate data stream. And as before, if you open one of these files for writing, then the file reverts to its uncompressed form.

Bonus chatter: You can use the Wof­Should­Compress­Binaries function to determine whether the system is using WOF to compress system files. From the command line, you can use the compact.exe program to inspect the compression state of a file, or of the system.

Oh, and going back to the customer’s original question: the system’s choice to use Windows Overlay Filter compression spends a small amount of parallel computation in order to save a small amount of I/O. It’s theoretically possible that you stumbled across a hardware configuration where the system’s automatic evaluation suggested using the Windows Overlay Filter even though it was a net performance loss. I guess that would happen if you had a really fast storage device attached to a low-end CPU, and it somehow managed to trick the automatic evaluation into thinking that compression was a good idea. In practice, it is rather unusual to have a hardware configuration consisting of fast storage and a slow CPU.

Many thanks to Malcolm Smith for his assistance with this article.

¹ The sector alignment was necessary to permit data to be rewritten into the middle of the file. But since the Windows Overlay Filter doesn’t support writing, it doesn’t need to enforce sector alignments.

² Since these files are Windows system files, opening them for writing requires administrator access. Normal usage therefore would not trigger a full decompression.

³ This “decompress on write” behavior merely describes the current behavior and is not contractual.

The post What is WofCompressedData? Does WOF mean that Windows is a dog? appeared first on The Old New Thing.


How do I write a function that accepts any type of standard container?


Suppose you want a function to accept any sort of standard container.
You just want a bunch of, say, integers, and it could arrive in the form of a std::vector<int> or a std::list<int> or a std::set<int> or whatever.

I would like to take this time to point out (because everybody else is about to point this out) that the traditional way of doing this is to accept a pair of iterators. So make sure you have a two-iterator version. But you also want to make it convenient to pass a container, because requiring people to pass a pair of iterators forces them to introduce a name and a scope.

extern std::set<int> get_the_ints();
// Convenient.
auto result = do_something_with(get_the_ints());

// Hassle.
auto the_ints = get_the_ints();
auto result = do_something_with(the_ints.begin(), the_ints.end());

Not only did you have to give a name to the set returned by get_the_ints, you now have to deal with the lifetime of that thing you just named. You probably want to destruct it right away, seeing as there’s no point hanging on to it, but that leaves you with some weird scoping issues.

{
  auto the_ints = get_the_ints();
  auto result = do_something_with(the_ints.begin(), the_ints.end());
} // destruct the_ints
// oops, I also lost the result!

If you wanted to accept anything and figure it out later, you could write

template<typename C>
auto do_something_with(C const& container)
{
  for (int v : container) { ... }
}

This takes anything at all, but if it’s not something that can be used in a ranged for statement, or if the contents of the container are not convertible to int, you’ll get a compiler error.

Maybe that’s okay, but maybe the overly-generous version conflicts with other overloads you want to offer. For example, maybe you want to let people pass anything convertible to int, and you’ll treat it as if it were a collection with a single element.

auto do_something_with(int v)
{
  ... use v ...
}

This overload looks fine, until somebody tries this:

do_something_with('x');

Now there is an ambiguous overload, because the char could match the first overload by taking C = char, or it could match the second overload via a conversion operator.

SFINAE to the rescue.

We can give the container version a second type parameter that uses SFINAE to verify that the thing is actually a container.

template<typename C, typename T = typename C::value_type>
auto do_something_with(C const& container)
{
  for (int v : container) { ... }
}

All standard containers have a member type named value_type which is the type of the thing inside the collection. We sniff for that type, and if no such type exists, then SFINAE kicks in, and that overload is removed from consideration, and we try the overload that looks for a conversion to int.

Now, it could be that you have a container that doesn’t implement value_type, but it still implements begin and end (presumably via ADL), so that the ranged for statement works. You can encode that in the SFINAE:

template<typename C,
    typename T = std::decay_t<
        decltype(*begin(std::declval<C>()))>>
auto do_something_with(C const& container)
{
  for (int v : container) { ... }
}

Starting with the type C, we use std::declval to pretend to create a value of that type, so that we can call begin on it, and then dereference the resulting iterator, and then decay it, producing a type T that represents the thing being enumerated. If any of these steps fails, say because there is no available begin, then the entire overload is discarded by SFINAE.

This was a bit of overkill because we never actually used the type T, but I kept it in because it sometimes comes in handy knowing what T is.

If you wanted to filter further to the case where the contents of the container are convertible to int, you can toss in some enable_if action:

template<typename C,
    typename T = std::decay_t<
        decltype(*begin(std::declval<C>()))>,
    typename = std::enable_if_t<
        std::is_convertible_v<T, int>>>
auto do_something_with(C const& container)
{
  for (int v : container) { ... }
}

The post How do I write a function that accepts any type of standard container? appeared first on The Old New Thing.

Getting a value from a std::variant that matches the type fetched from another variant


Suppose you have two std::variant objects of the same type and you want to perform some operation on corresponding pairs of types.

using my_variant = std::variant<int, double, std::string>;

bool are_equivalent(my_variant const& left,
                    my_variant const& right)
{
  if (left.index() != right.index()) return false;

  switch (left.index())
  {
  case 0:
    return are_equivalent(std::get<0>(left),
                          std::get<0>(right));
    break;

  case 1:
    return are_equivalent(std::get<1>(left),
                          std::get<1>(right));
    break;

  default:
    return are_equivalent(std::get<2>(left),
                          std::get<2>(right));
    break;
  }
}

Okay, what’s going on here?

We have a std::variant that can hold one of three possible types. First, we see if the two variants are even holding the same types. If not, then they are definitely not equivalent.

Otherwise, we check what is in the left object by switching on the index, and then check if the corresponding contents are equivalent.

In the case I needed to do this, the variants were part of a recursive data structure, so the recursive call to are_equivalent really did recurse deeper into the data structure.

There’s a little trick hiding in the default case: That case gets hit either when the index is 2, indicating that we have a std::string, or when the index is variant_npos, indicating that the variant is in a horrible state. If it does indeed hold a string, then the calls to std::get<2> succeed, and if it’s in a horrible state, we get a bad_variant_access exception.

This is tedious code to write. Surely there must be a better way.

What I came up with was to use the visitor pattern with a templated handler.

bool are_equivalent(my_variant const& left,
                    my_variant const& right)
{
  if (left.index() != right.index()) return false;

  return std::visit([&](auto const& l)
    {
      using T = std::decay_t<decltype(l)>;
      return are_equivalent(l, std::get<T>(right));
    }, left);

After verifying that the indices match, we visit the variant with a generic lambda and then reverse-engineer the appropriate getter to use for the right hand side by studying the type of the thing we were given. The std::get<T> will not throw because we already validated that the types match. (On the other hand, the entire std::visit could throw if both left and right are in horrible states.)

Note that this trick fails if the variant repeats types, because the type passed to std::get is now ambiguous.

Anyway, I had to use this pattern in a few places, so I wrote a helper function:

template<typename Template, typename... Args>
decltype(auto)
get_matching_alternative(
    const std::variant<Args...>& v,
    Template&&)
{
    using T = std::decay_t<Template>;
    return std::get<T>(v);
}

You pass this helper the variant you have and something that represents the thing you want, and the function returns the corresponding thing from the variant. With this helper, the are_equivalent function looks like this:

bool are_equivalent(my_variant const& left,
                    my_variant const& right)
{
  if (left.index() != right.index()) return false;

  return std::visit([&](auto const& l)
    {
      return are_equivalent(l,
                   get_matching_alternative(right, l));
    }, left);

I’m still not entirely happy with this, though. Maybe you can come up with something better?

The post Getting a value from a <CODE>std::variant</CODE> that matches the type fetched from another variant appeared first on The Old New Thing.

The 2019 Microsoft Giving Campaign Run/Walk comes with some ground rules


Today is the Microsoft Giving Campaign 5K Run/Walk, which raises money in support of the Crohn’s & Colitis Foundation, KEXP radio, Boys & Girls Clubs of Bellevue, and (new this year) Plymouth Housing.

This is a fun event, not a race, and friends and family are welcome to participate. But one of my colleagues pointed out that there are some ground rules:

To ensure that this is a safe and fun event for all participants, the following are not permitted on the race course:

  • pets (this policy does not apply to service animals)
  • skateboards or rollerblades
  • wagons
  • scooters or segways
  • office chairs
  • any other items deemed potential hazards.

You can try to guess how that fifth item got on the list.

Related: Tips for planning your ship party.

 

The post The 2019 Microsoft Giving Campaign Run/Walk comes with some ground rules appeared first on The Old New Thing.

Detecting whether the -opt flag was passed to cppwinrt.exe: Using __has_include


I was upgrading the Windows UWP Samples repo to take advantage of the new -opt flag introduced in C++/WinRT 2.0. This provides performance improvements for accessing static class members, and avoids having to register the type in your manifest for strictly in-module consumption.

The new -opt flag enables these optimizations, but it also adds a new requirement: Your implementation file needs to #include <ClassName.g.cpp>. The problem is that I wanted to upgrade the samples one at a time, but that meant that the shared files needed to support both optimized and unoptimized builds, at least until I get them all converted.

I was at a bit of a loss, because there was no obvious #define in winrt/base.h that tells me whether the -opt flag was passed.

And then I realized: I could use __has_include.

C++17 introduced the __has_include preprocessor keyword which snoops around to determine whether a header file exists. The idea is that you could conditionalize based on whether an optional header file is present. For example, you might check for the presence of xmmintrin.h and conditionally enable SSE operations.

In my case, I wouldn’t be probing for a system header file, but rather for a generated .g.cpp file produced by cppwinrt.exe in -opt mode.

#if __has_include(<MainPage.g.cpp>)
#include <MainPage.g.cpp>
#endif

If cppwinrt.exe were run with the -opt flag, then the MainPage.g.cpp file will exist in the Generated Files directory, and I can include it. If it were run without the -opt flag, then the MainPage.g.cpp file will not exist, and I skip over it.

 

The post Detecting whether the <CODE>-opt</CODE> flag was passed to <CODE>cppwinrt.exe</CODE>: Using <CODE>__has_include</CODE> appeared first on The Old New Thing.

Why was Windows for Workgroups pejoratively nicknamed Windows for Warehouses?


The first version of Windows with networking support built in was Windows for Workgroups 3.10. The intended audience for this was small businesses who wanted to network their computers together into units known as workgroups. (The term persists in Windows NT as well, referring to an unmanaged collection of computers operating in a peer-to-peer manner.)

Windows for Workgroups came with a network card, instructions for installing it, and even a screwdriver to assist with the installation. Now, there were two network cable standards at the time: BNC and 10Base-T. The network card that came with Windows for Workgroups 3.10 used BNC, which turned out to be the loser in the standards battle.

As a result, there was not a lot of interest in a network card that used an unpopular cable standard. Sales for Windows for Workgroups 3.10 were weak, which led many in the Windows division to bestow upon it the pejorative nickname of Windows for Warehouses, referring to the presumption that most copies of Windows for Workgroups 3.10 existed in the form of unsold inventory in warehouses.

Windows for Workgroups 3.11 solved this problem by omitting the network card entirely. People could choose their own network card, presumably one that used a popular cable standard. It also added significant performance improvements, including an early version of the 32-bit file system that also shipped in Windows 95. This version of Windows for Workgroups was a smashing success.

But the nickname stuck. Once you get a nickname, it’s hard to shake it off.

The post Why was Windows for Workgroups pejoratively nicknamed Windows for Warehouses? appeared first on The Old New Thing.
