Scripting: User Interfaces
Panels
Panel - Space Types - Region Types
Panels are the collapsible UI blocks shown in editor regions such as the 3D Viewport sidebar. They are classes derived from the Panel class and are defined by a set of `bl_*` metadata attributes plus a `draw()` method:
from bpy.types import Panel

class VIEW3D_PT_view3d_cursor(Panel):
    bl_space_type = 'VIEW_3D'   # editor the panel belongs to
    bl_region_type = 'UI'       # 'UI' = the sidebar (N-panel)
    bl_category = "View"        # sidebar tab name
    bl_label = "Some Stuff"     # panel header text

    def draw(self, context):
        layout = self.layout  # a UILayout
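For the panel to actually appear it has to be registered. A minimal registration sketch, reusing the class from the example above (`register`/`unregister` are the standard add-on entry points):

```python
import bpy

def register():
    bpy.utils.register_class(VIEW3D_PT_view3d_cursor)

def unregister():
    bpy.utils.unregister_class(VIEW3D_PT_view3d_cursor)

if __name__ == "__main__":
    register()
```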
Great StackExchange Post about Blender GUI
UI Layouts and Widgets
Every panel (and menu, header, …) exposes a `UILayout` object via `self.layout` inside its `draw()` method; widgets are added by calling methods on that layout.
Important methods (a combined sketch using them follows the table):
| Method | Comment |
|---|---|
| `row` | Defines a row sub-layout. |
| `column` | Defines a column sub-layout. |
| `box` | Defines a column sub-layout drawn inside a box. |
| `split` | Defines a split-style sub-layout; the split `factor` defines the size of the first item. |
| `prop` | Exposes an RNA property on the layout. Various special forms exist; the type of the property determines the UI style. |
| `operator` | Adds a button that executes the given operator. Simple Object OP Panel |
| `label` | Displays text and/or an icon in the layout. |
| `menu` | Adds a menu to the layout. Menu Example |
| `separator` | Adds empty space. `factor` (default 1.0) scales the width/height of the space; `type` is one of "AUTO", "SPACE", "LINE" (default "AUTO"). |
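A combined sketch of these methods inside a panel's `draw()`. The class name `VIEW3D_PT_layout_demo` is made up for the example; the property, operator and menu identifiers (`name`, `location`, `mesh.primitive_cube_add`, `VIEW3D_MT_object`) are standard Blender ones used purely for illustration:

```python
from bpy.types import Panel

class VIEW3D_PT_layout_demo(Panel):
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "View"
    bl_label = "Layout Demo"

    def draw(self, context):
        layout = self.layout
        obj = context.object

        # row / box / column sub-layouts
        row = layout.row(align=True)
        row.label(text="Active object", icon='OBJECT_DATA')

        box = layout.box()
        col = box.column()
        if obj is not None:
            col.prop(obj, "name")      # string property -> text field
            col.prop(obj, "location")  # vector property -> three number fields

        # split: the first item gets 30% of the width
        split = layout.split(factor=0.3)
        split.label(text="Add:")
        split.operator("mesh.primitive_cube_add", text="Cube")

        layout.separator(factor=1.5)
        layout.menu("VIEW3D_MT_object")  # existing menu, referenced by bl_idname
```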
Background Info
In a devtalk.blender.org topic, Julian Eisel (Blender software developer) wrote this:
The Blender GUI is established through an interplay of multiple parts of the code. I.e. on a high level:
- GHOST – OS dependent code (windows, OpenGL context, device input, etc.)
- Window-Manager – OS independent management of windows, events, keymaps, data-change notifiers, etc.
- Interface (source/blender/editors/interface/) – button drawing, button event handling, layouts, UI tools (e.g. eyedroppers), menus/popups, …
- Editors – screen-layouts (areas & regions – think of these as the sub-windows), editors (e.g. 3D View, Properties, Node Editor, …), gizmo libraries, tools for individual editors, etc. The Python scripts for layout definitions are executed as part of this too.
Of course this is a very simplified look. There are further things involved like file read/write, undo/redo, translations, context, preferences, add-ons…
As for immediate vs. retained mode (I think this relates to the GUI code, not the graphics drawing): One could argue it’s a mixture, but mostly retained. Buttons are defined, the layout engine runs, later on buttons can capture events, respond with state changes and ultimately tell our data system to update data based on that. It’s roughly an MVC design. However, buttons are still created on the fly. On each redraw, all (non-active) buttons are destructed and the layout is re-constructed almost from scratch. That way, UI definitions (e.g. the Python scripts) can conditionally place items:
if condition:
    # Optionally create a button for the data.propname property.
    layout.prop(data, "propname")
This is a typical immediate mode characteristic. For retained mode you’d explicitly hide an existing button.
One thing the GUI code is quite “smart” about is reducing redraws: If data changes, a notifier can be sent, that categorizes this change. Different parts of the UI can listen to these notifiers, and tag themselves for a redraw if they care about the category of data. That way, a change in the Dopesheet usually does not cause the Image Editor regions to redraw.
There are some old 2.5 docs on these designs here: https://archive.blender.org/wiki/index.php/Dev:Source/Architecture/
All of this sounds like a carefully designed architecture, and in some ways it is. But much of this is historical. The UI code contains some of the most messy code I know in Blender. It’s a mixture of very old code (literally from the first days of Blender), newer 2.5 designs and years and years of hacks. Nevertheless, I think especially the 2.5 design brought useful concepts to the design. Although I think the way they were implemented is problematic in many cases.
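From the Python API side, a rough analogue of the notifier mechanism described above is the message bus: an add-on can subscribe to changes of an RNA property and tag only the regions it cares about for redraw. A minimal sketch, where the subscribed property (`Object.name`) and the owner token are arbitrary choices for illustration:

```python
import bpy

_owner = object()  # any hashable object identifying this subscription

def _on_object_name_changed():
    # Tag every 3D Viewport area for redraw when the subscribed data changes.
    for window in bpy.context.window_manager.windows:
        for area in window.screen.areas:
            if area.type == 'VIEW_3D':
                area.tag_redraw()

bpy.msgbus.subscribe_rna(
    key=(bpy.types.Object, "name"),  # notify on changes to Object.name
    owner=_owner,
    args=(),
    notify=_on_object_name_changed,
)

# later, e.g. in unregister():
# bpy.msgbus.clear_by_owner(_owner)
```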